\n",
"Currently, this functionality requires a Python environment with a newer version of the ase library than the one
\n",
"used by the pynxtools installation (currently ase==3.19.0); ase>=3.22.1 should be used instead.
\n",
"The issue with the specific functionalities used in the *create_reconstructed_positions* function is that when using
\n",
@@ -448,12 +465,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "
\n",
- "This functionality uses recent features of ase which demands an environment that is currently not supported
\n",
+ "
\n",
+ "This functionality uses recent features of ase, which demand an environment that is not necessarily supported
\n",
"by NOMAD OASIS. As the here exemplified settings for this example are configured to represent an environment
\n",
- "matching close to NOMAD users who are interested in this developer functionality should do the following:
\n",
+ "matching one close to NOMAD, users interested in this developer functionality should do the following:
\n",
"Run this example in a standalone environment where ase is upgraded to the latest version and then use
\n",
"the generated NeXus files either as is or upload them to NOMAD OASIS.
\n",
+ "If the above-mentioned cell detects that a recent version of ase is installed
\n",
+ "(e.g. >=3.22.1), the code in the following cell can be executed without issues.
\n",
"
"
]
},
@@ -465,7 +484,7 @@
},
"outputs": [],
"source": [
- "# ! dataconverter --reader apm --nxdl NXapm --input-file synthesize1 --output apm.case0.nxs"
+ "! dataconverter --reader apm --nxdl NXapm --input-file synthesize1 --output apm.case0.nxs"
]
},
{
@@ -496,7 +515,7 @@
"metadata": {},
"source": [
"### Contact person for the apm reader and related examples in FAIRmat:\n",
- "Markus Kühbach, 2023/05
\n",
+ "Markus Kühbach, 2023/08/31
\n",
"\n",
"### Funding\n",
"
FAIRmat is a consortium on research data management which is part of the German NFDI.
\n",
@@ -527,7 +546,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.8.16"
+ "version": "3.10.12"
}
},
"nbformat": 4,
diff --git a/examples/apm/apm.oasis.specific.yaml b/examples/apm/apm.oasis.specific.yaml
new file mode 100644
index 000000000..82394f07e
--- /dev/null
+++ b/examples/apm/apm.oasis.specific.yaml
@@ -0,0 +1 @@
+location: Leoben
diff --git a/examples/apm/eln_data_apm.yaml b/examples/apm/eln_data_apm.yaml
index 11e29ced4..ddd67ebcf 100644
--- a/examples/apm/eln_data_apm.yaml
+++ b/examples/apm/eln_data_apm.yaml
@@ -1,82 +1,118 @@
atom_probe:
analysis_chamber_pressure:
unit: torr
- value: 1.0e-10
+ value: 2.0e-10
control_software_program: IVAS
- control_software_program__attr_version: 3.6.4
- fabrication_capabilities: n/a
- fabrication_identifier: n/a
+ control_software_program__attr_version: 3.6.8
+ fabrication_identifier: '12'
fabrication_model: LEAP3000
- fabrication_vendor: AMETEK/Cameca
+ fabrication_vendor: Cameca
+ field_of_view:
+ unit: nm
+ value: 20.
flight_path_length:
unit: m
- value: 0.9
- instrument_name: LEAP 3000
- ion_detector_model: cameca
- ion_detector_name: none
+ value: 1.2
+ instrument_name: LEAP
+ ion_detector_model: n/a
+ ion_detector_name: n/a
ion_detector_serial_number: n/a
ion_detector_type: mcp_dld
- local_electrode_name: electrode 1
+ local_electrode_name: L1
+ location: Denton
pulser:
- laser_source_name: laser
- laser_source_power:
- unit: W
- value: 2.0e-08
- laser_source_pulse_energy:
- unit: J
- value: 1.2e-11
- laser_source_wavelength:
- unit: m
- value: 4.8e-07
- pulse_fraction: 0.1
+ laser_source:
+ - name: laser1
+ power:
+ unit: nW
+ value: 24.0
+ pulse_energy:
+ unit: pJ
+ value: 24.0
+ wavelength:
+ unit: nm
+ value: 355.0
+ - name: laser2
+ power:
+ unit: nW
+ value: 12.0
+ pulse_energy:
+ unit: pJ
+ value: 12.0
+ wavelength:
+ unit: nm
+ value: 254.0
+ pulse_fraction: 0.8
pulse_frequency:
unit: kHz
- value: 250
- pulse_mode: laser
+ value: 250.0
+ pulse_mode: laser_and_voltage
reflectron_applied: true
- specimen_monitoring_detection_rate: 0.6
+ specimen_monitoring_detection_rate: 0.8
specimen_monitoring_initial_radius:
unit: nm
- value: 30
+ value: 12.0
specimen_monitoring_shank_angle:
unit: °
- value: 5
+ value: 5.0
stage_lab_base_temperature:
unit: K
- value: 30
+ value: 20.0
status: success
entry:
- attr_version: nexus-fairmat-proposal successor of 9636feecb79bb32b828b1a9804269573256d7696
- definition: NXapm
- end_time: '2022-09-22T20:00:00+00:00'
- experiment_description: some details for nomad, ODS steel precipitates for testing
- a developmental clustering algorithm called OPTICS.
- experiment_identifier: R31-06365-v02
+ experiment_description: '
Normal
+
+
Bold
+
+
Italics
'
+ experiment_identifier: Si test
+ start_time: '2023-06-11T11:20:00+00:00'
+ end_time: '2023-06-11T11:20:00+00:00'
+ run_number: '2121'
operation_mode: apt
- program: IVAS
- program__attr_version: 3.6.4
- run_number: '6365'
- start_time: '2022-09-20T20:00:00+00:00'
ranging:
program: IVAS
- program__attr_version: 3.6.4
+ program__attr_version: 3.6.8
reconstruction:
crystallographic_calibration: n/a
- parameter: kf = 1.8, ICF = 1.02, Vat = 60 at/nm^3
+ parameter: kf = 1.8, icf = 3.3
program: IVAS
- program__attr_version: 3.6.4
- protocol_name: cameca
+ program__attr_version: 3.6.8
+ protocol_name: bas
+sample:
+ composition:
+ - Mo
+ - Al 12 +- 3
+ - B 50 ppm +- 12
+ - C 3.6
+ grain_diameter:
+ unit: µm
+ value: 200.0
+ grain_diameter_error:
+ unit: µm
+ value: 50.0
+ heat_treatment_quenching_rate:
+ unit: K / s
+ value: 150.0
+ heat_treatment_quenching_rate_error:
+ unit: K / s
+ value: 10.0
+ heat_treatment_temperature:
+ unit: K
+ value: 600.0
+ heat_treatment_temperature_error:
+ unit: K
+ value: 20.0
specimen:
- atom_types:
- - Fe
- - Cr
- - Y
- - O
- description: ODS steel, i.e. material with Y2O3 dispersoids
- name: ODS-Specimen 1
- preparation_date: '2022-09-12T20:01:00+00:00'
- sample_history: undocumented
- short_title: ODS
+ alias: Si
+ description: '
normal
+
+
bold
+
+
italics
'
+ is_polycrystalline: true
+ name: usa_denton_smith_si
+ preparation_date: '2023-06-11T12:51:00+00:00'
user:
-- name: Jing Wang
-- name: Daniel Schreiber
+- {}
+- {}
diff --git a/examples/ellipsometry/eln_data.yaml b/examples/ellipsometry/eln_data.yaml
index 70b708ef3..f20f75861 100644
--- a/examples/ellipsometry/eln_data.yaml
+++ b/examples/ellipsometry/eln_data.yaml
@@ -5,7 +5,7 @@ Data:
data_software/version: '3.882'
data_type: Psi/Delta
spectrum_type: wavelength
- spectrum_unit: Angstroms
+ spectrum_unit: angstrom
Instrument:
Beam_path:
Detector:
@@ -58,9 +58,6 @@ colnames:
- Delta
- err.Psi
- err.Delta
-definition: NXellipsometry
-definition/@url: https://github.com/FAIRmat-NFDI/nexus_definitions/blob/fairmat/contributed_definitions/NXellipsometry.nxdl.xml
-definition/@version: 0.0.2
derived_parameter_type: depolarization
experiment_description: RC2 scan on 2nm SiO2 on Si in air
experiment_identifier: exp-ID
diff --git a/examples/em_nion/Write.NXem_nion.Example.1.ipynb b/examples/em_nion/Write.NXem_nion.Example.1.ipynb
index af08fdd0e..0d48dea69 100644
--- a/examples/em_nion/Write.NXem_nion.Example.1.ipynb
+++ b/examples/em_nion/Write.NXem_nion.Example.1.ipynb
@@ -88,7 +88,15 @@
"metadata": {},
"outputs": [],
"source": [
- "! wget https://www.zenodo.org/record/7986279/files/ger_berlin_haas_nionswift_multimodal.zip\n",
+ "! wget https://www.zenodo.org/record/7986279/files/ger_berlin_haas_nionswift_multimodal.zip"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
"zp.ZipFile(\"ger_berlin_haas_nionswift_multimodal.zip\").extractall(path=\"\", members=None, pwd=None)"
]
},
@@ -240,7 +248,7 @@
"metadata": {},
"source": [
"### Contact person for the em_nion reader and related examples in FAIRmat:\n",
- "Markus Kühbach, 2023/05
\n",
+ "Markus Kühbach, 2023/08/31
\n",
"\n",
"### Funding\n",
"
FAIRmat is a consortium on research data management which is part of the German NFDI.
\n",
@@ -271,7 +279,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.6"
+ "version": "3.10.12"
}
},
"nbformat": 4,
diff --git a/examples/em_om/Write.NXem_ebsd.Example.1.ipynb b/examples/em_om/Write.NXem_ebsd.Example.1.ipynb
index dd62925fb..7f5afeb6e 100644
--- a/examples/em_om/Write.NXem_ebsd.Example.1.ipynb
+++ b/examples/em_om/Write.NXem_ebsd.Example.1.ipynb
@@ -259,11 +259,13 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
+ "scrolled": true,
"tags": []
},
"outputs": [],
"source": [
"#parser-nexus/tests/data/tools/dataconverter/readers/em_om/\n",
+ "import numpy as np\n",
"eln_data_file_name = [\"eln_data_em_om.yaml\"]\n",
"input_data_file_name = [\"PrcShanghaiShi.EBSPs70deg.zip\",\n",
" \"H5OINA_examples_Specimen_1_Map_EDS_+_EBSD_Map_Data_2.h5oina\",\n",
@@ -273,7 +275,7 @@
" \"em_om.case2.nxs\",\n",
" \"em_om.case3e.nxs\",\n",
" \"em_om.case4.nxs\"]\n",
- "for case_id in [4]: # [0, 1, 2, 3]:\n",
+ "for case_id in np.arange(0, 3 + 1):\n",
" ELN = eln_data_file_name[0]\n",
" INPUT = input_data_file_name[case_id]\n",
" OUTPUT = output_file_name[case_id]\n",
@@ -305,10 +307,10 @@
"source": [
"# H5Web(OUTPUT)\n",
"H5Web(\"em_om.case0.nxs\")\n",
- "H5Web(\"em_om.case1.nxs\")\n",
- "H5Web(\"em_om.case2.nxs\")\n",
- "H5Web(\"em_om.case3e.nxs\")\n",
- "H5Web(\"em_om.case4.nxs\")"
+ "# H5Web(\"em_om.case1.nxs\")\n",
+ "# H5Web(\"em_om.case2.nxs\")\n",
+ "# H5Web(\"em_om.case3e.nxs\")\n",
+ "# H5Web(\"em_om.case4.nxs\")"
]
},
{
@@ -338,7 +340,7 @@
"metadata": {},
"source": [
"### Contact person for the apm reader and related examples in FAIRmat:\n",
- "Markus Kühbach, 2023/05
\n",
+ "Markus Kühbach, 2023/08/31
\n",
"\n",
"### Funding\n",
"
FAIRmat is a consortium on research data management which is part of the German NFDI.
\n",
@@ -362,7 +364,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.8.16"
+ "version": "3.10.12"
},
"vscode": {
"interpreter": {
diff --git a/examples/em_spctrscpy/Write.NXem.Example.1.ipynb b/examples/em_spctrscpy/Write.NXem.Example.1.ipynb
index 61b0f33d3..3b57b7f9f 100644
--- a/examples/em_spctrscpy/Write.NXem.Example.1.ipynb
+++ b/examples/em_spctrscpy/Write.NXem.Example.1.ipynb
@@ -239,9 +239,9 @@
"outputs": [],
"source": [
"# H5Web(OUTPUT)\n",
- "# H5Web(\"em_sp.case1.nxs\")\n",
+ "H5Web(\"em_sp.case1.nxs\")\n",
"# H5Web(\"em_sp.case2.nxs\")\n",
- "H5Web(\"em_sp.case3.nxs\")"
+ "# H5Web(\"em_sp.case3.nxs\")"
]
},
{
@@ -305,7 +305,7 @@
"metadata": {},
"source": [
"### Contact person for the apm reader and related examples in FAIRmat:\n",
- "Markus Kühbach, 2023/05
\n",
+ "Markus Kühbach, 2023/08/31
\n",
"\n",
"### Funding\n",
"
FAIRmat is a consortium on research data management which is part of the German NFDI.
\n",
@@ -336,7 +336,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.8.16"
+ "version": "3.10.12"
},
"vscode": {
"interpreter": {
diff --git a/examples/json_map/README.md b/examples/json_map/README.md
new file mode 100644
index 000000000..788cb6890
--- /dev/null
+++ b/examples/json_map/README.md
@@ -0,0 +1,36 @@
+# JSON Map Reader
+
+## What is this reader?
+
+This reader is designed to allow users of pynxtools to convert their existing data with the help of a map file. The map file tells the reader what to pick from your data files and convert them to FAIR NeXus files. The following formats are supported as input files:
+* HDF5 (any extension works, e.g. h5, hdf5, nxs, etc.)
+* JSON
+* Python Dict Objects Pickled with [pickle](https://docs.python.org/3/library/pickle.html). These can contain [xarray.DataArray](https://docs.xarray.dev/en/stable/generated/xarray.DataArray.html) objects as well as regular Python types and Numpy types.
+
+It accepts any NXDL file that you like as long as your mapping file contains all the fields.
+Please use the --generate-template function of the dataconverter to create a .mapping.json file.
+
+```console
+user@box:~$ dataconverter --nxdl NXmynxdl --generate-template > mynxdl.mapping.json
+```
+##### Details on the [mapping.json](/pynxtools/dataconverter/readers/json_map/README.md#the-mappingjson-file) file.
+
+## How to run these examples?
+
+### Automatically merge partial NeXus files
+```console
+user@box:~$ dataconverter --nxdl NXiv_temp --input-file voltage_and_temperature.nxs --input-file current.nxs --output auto_merged.nxs
+```
+
+### Map and copy over data to new NeXus file
+```console
+user@box:~$ dataconverter --nxdl NXiv_temp --mapping merge_copied.mapping.json --input-file voltage_and_temperature.nxs --input-file current.nxs --output merged_copied.nxs
+```
+
+### Map and link over data to new NeXus file
+```console
+user@box:~$ dataconverter --nxdl NXiv_temp --mapping merge_linked.mapping.json --input-file voltage_and_temperature.nxs --input-file current.nxs --output merged_linked.nxs
+```
+
+## Contact person in FAIRmat for this reader
+Sherjeel Shabih
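
The mapping file described above is plain JSON; as a sketch of what such a file contains, the snippet below writes a minimal mapping modeled on the `merge_copied.mapping.json` example added in this PR. The exact keys are illustrative: left-hand sides are template paths from `--generate-template`, plain string values are written verbatim, and values starting with `/` are (per the copy example) taken as paths into the input HDF5 file.

```python
import json

# Hypothetical minimal mapping for the json_map reader, modeled on the
# merge_copied.mapping.json example in this PR. Keys are NeXus template
# paths; string values are either literals or HDF5 paths in the input file.
mapping = {
    "/@default": "entry",
    "/ENTRY[entry]/definition": "NXiv_temp",
    "/ENTRY[entry]/DATA[data]/current": "/entry/data/current",
    "/ENTRY[entry]/DATA[data]/voltage": "/entry/data/voltage",
}

with open("my_custom_map.mapping.json", "w", encoding="utf-8") as fp:
    json.dump(mapping, fp, indent=2)
```

Such a file would then be passed to the converter via `--mapping my_custom_map.mapping.json`.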
diff --git a/examples/json_map/merge_copied.mapping.json b/examples/json_map/merge_copied.mapping.json
new file mode 100644
index 000000000..bba897874
--- /dev/null
+++ b/examples/json_map/merge_copied.mapping.json
@@ -0,0 +1,35 @@
+{
+ "/@default": "entry",
+ "/ENTRY[entry]/DATA[data]/current": "/entry/data/current",
+ "/ENTRY[entry]/DATA[data]/current_295C": "/entry/data/current_295C",
+ "/ENTRY[entry]/DATA[data]/current_300C": "/entry/data/current_300C",
+ "/ENTRY[entry]/DATA[data]/current_305C": "/entry/data/current_305C",
+ "/ENTRY[entry]/DATA[data]/current_310C": "/entry/data/current_310C",
+ "/ENTRY[entry]/DATA[data]/temperature": "/entry/data/temperature",
+ "/ENTRY[entry]/DATA[data]/voltage": "/entry/data/voltage",
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/voltage_controller/calibration_time": "/entry/instrument/environment/voltage_controller/calibration_time",
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/voltage_controller/run_control": "/entry/instrument/environment/voltage_controller/run_control",
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/voltage_controller/value": "/entry/instrument/environment/voltage_controller/value",
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/temperature_controller/calibration_time": "/entry/instrument/environment/temperature_controller/calibration_time",
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/temperature_controller/run_control": "/entry/instrument/environment/temperature_controller/run_control",
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/temperature_controller/value": "/entry/instrument/environment/temperature_controller/value",
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/current_sensor/calibration_time": "/entry/instrument/environment/current_sensor/calibration_time",
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/current_sensor/run_control": "/entry/instrument/environment/current_sensor/run_control",
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/current_sensor/value": "/entry/instrument/environment/current_sensor/value",
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/independent_controllers": ["voltage_controller", "temperature_control"],
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/measurement_sensors": ["current_sensor"],
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/NXpid[heating_pid]/description": "/entry/instrument/environment/heating_pid/description",
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/NXpid[heating_pid]/setpoint": "/entry/instrument/environment/heating_pid/setpoint",
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/NXpid[heating_pid]/K_p_value": "/entry/instrument/environment/heating_pid/K_p_value",
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/NXpid[heating_pid]/K_i_value": "/entry/instrument/environment/heating_pid/K_i_value",
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/NXpid[heating_pid]/K_d_value": "/entry/instrument/environment/heating_pid/K_d_value",
+ "/ENTRY[entry]/PROCESS[process]/program": "Bluesky",
+ "/ENTRY[entry]/PROCESS[process]/program/@version": "1.6.7",
+ "/ENTRY[entry]/SAMPLE[sample]/name": "super",
+ "/ENTRY[entry]/SAMPLE[sample]/atom_types": "Si, C",
+ "/ENTRY[entry]/definition": "NXiv_temp",
+ "/ENTRY[entry]/definition/@version": "1",
+ "/ENTRY[entry]/experiment_identifier": "dbdfed37-35ed-4aee-a465-aaa0577205b1",
+ "/ENTRY[entry]/experiment_description": "A simple IV temperature experiment.",
+ "/ENTRY[entry]/start_time": "2022-05-30T16:37:03.909201+02:00"
+}
\ No newline at end of file
diff --git a/examples/json_map/merge_linked.mapping.json b/examples/json_map/merge_linked.mapping.json
new file mode 100644
index 000000000..47ede8b92
--- /dev/null
+++ b/examples/json_map/merge_linked.mapping.json
@@ -0,0 +1,25 @@
+{
+ "/@default": "entry",
+ "/ENTRY[entry]/DATA[data]/current": {"link": "current.nxs:/entry/data/current"},
+ "/ENTRY[entry]/DATA[data]/current_295C": {"link": "current.nxs:/entry/data/current_295C"},
+ "/ENTRY[entry]/DATA[data]/current_300C": {"link": "current.nxs:/entry/data/current_300C"},
+ "/ENTRY[entry]/DATA[data]/current_305C": {"link": "current.nxs:/entry/data/current_305C"},
+ "/ENTRY[entry]/DATA[data]/current_310C": {"link": "current.nxs:/entry/data/current_310C"},
+ "/ENTRY[entry]/DATA[data]/temperature": {"link": "voltage_and_temperature.nxs:/entry/data/temperature"},
+ "/ENTRY[entry]/DATA[data]/voltage": {"link": "voltage_and_temperature.nxs:/entry/data/voltage"},
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/voltage_controller": {"link": "voltage_and_temperature.nxs:/entry/instrument/environment/voltage_controller"},
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/temperature_controller": {"link": "voltage_and_temperature.nxs:/entry/instrument/environment/temperature_controller"},
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/current_sensor": {"link": "current.nxs:/entry/instrument/environment/current_sensor"},
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/independent_controllers": ["voltage_controller", "temperature_control"],
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/measurement_sensors": ["current_sensor"],
+ "/ENTRY[entry]/INSTRUMENT[instrument]/ENVIRONMENT[environment]/NXpid[heating_pid]": {"link": "voltage_and_temperature.nxs:/entry/instrument/environment/heating_pid"},
+ "/ENTRY[entry]/PROCESS[process]/program": "Bluesky",
+ "/ENTRY[entry]/PROCESS[process]/program/@version": "1.6.7",
+ "/ENTRY[entry]/SAMPLE[sample]/name": "super",
+ "/ENTRY[entry]/SAMPLE[sample]/atom_types": "Si, C",
+ "/ENTRY[entry]/definition": "NXiv_temp",
+ "/ENTRY[entry]/definition/@version": "1",
+ "/ENTRY[entry]/experiment_identifier": "dbdfed37-35ed-4aee-a465-aaa0577205b1",
+ "/ENTRY[entry]/experiment_description": "A simple IV temperature experiment.",
+ "/ENTRY[entry]/start_time": "2022-05-30T16:37:03.909201+02:00"
+}
\ No newline at end of file
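
The `{"link": ...}` values above use a `<file>:<hdf5-path>` notation. The reader resolves these itself (e.g. as HDF5 external links), so the following is only a parsing sketch, with a hypothetical helper name, showing how the two halves separate:

```python
# Hypothetical helper illustrating the "<file>:<hdf5-path>" notation used by
# the {"link": ...} entries above; splitting on the first ":" yields the
# target file name and the path inside that file.
def split_link(link: str):
    """Split 'current.nxs:/entry/data/current' into (file, internal path)."""
    file_name, _, h5_path = link.partition(":")
    return file_name, h5_path

print(split_link("current.nxs:/entry/data/current"))
# ('current.nxs', '/entry/data/current')
```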
diff --git a/examples/sts/README.md b/examples/sts/README.md
new file mode 100644
index 000000000..eb2c53482
--- /dev/null
+++ b/examples/sts/README.md
@@ -0,0 +1,32 @@
+# STS Reader
+***Note: Though the reader is named STS reader, it also supports STM-type experiments. This is the first version of the reader according to the NeXus application definition [NXsts](https://github.com/FAIRmat-NFDI/nexus_definitions/blob/fairmat/contributed_definitions/NXsts.nxdl.xml), which is a generic template of concept definitions for STS and STM experiments. Later on, application definitions and readers specific to STM, STS and AFM will be available. To stay up to date, please revisit this page from time to time. From here on, STS refers to both STM and STS.***
+
+The main goal of the STS reader is to transform different file formats from diverse STS labs into the community standard [STS application definition](https://github.com/FAIRmat-NFDI/nexus_definitions/blob/fairmat/contributed_definitions/NXsts.nxdl.xml), a community-defined template that defines the individual concepts associated with an STS experiment, as constructed by the SPM community.
+## STS Example
+There are diverse examples from several versions of the Nanonis software (Generic 5e and Generic 4.5) for STS experiments at [https://gitlab.mpcdf.mpg.de](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-remote-tools-hub/-/tree/develop/docker/sts). To utilize these examples, one must have an account at https://gitlab.mpcdf.mpg.de. If you still want to try out the examples from the sts reader, please reach out to [Rubel Mozumder](mozumder@physik.hu-berlin.de) or use the docker container (discussed below).
+
+To get a detailed overview of the sts reader implementation visit [pynxtools](https://github.com/FAIRmat-NFDI/pynxtools/tree/master/pynxtools/dataconverter/readers/sts).
+
+## STS docker image
+The STS docker image contains all prerequisite tools (e.g. jupyter-notebook) and libraries to run the STS reader. To use the image, the user needs to [install the docker engine](https://docs.docker.com/engine/install/).
+
+STS Image: `gitlab-registry.mpcdf.mpg.de/nomad-lab/nomad-remote-tools-hub/sts-jupyter:latest`
+
+To run the STS image as a docker container, copy the code below into a file `docker-compose.yaml`:
+
+```yaml
+# docker-compose.yaml
+
+version: "3.9"
+
+services:
+ sts:
+ image: gitlab-registry.mpcdf.mpg.de/nomad-lab/nomad-remote-tools-hub/sts-jupyter:latest
+ ports:
+ - 8888:8888
+ volumes:
+ - ./example:/home/jovyan/work_dir
+ working_dir: /home/jovyan/work_dir
+```
+
+and launch it from the same directory with the `docker compose up` command.
diff --git a/pynxtools/__init__.py b/pynxtools/__init__.py
index 2290aef3b..12b6f64ba 100644
--- a/pynxtools/__init__.py
+++ b/pynxtools/__init__.py
@@ -18,3 +18,71 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
+import os
+import re
+from datetime import datetime
+from glob import glob
+from typing import Union
+
+from pynxtools._build_wrapper import get_vcs_version
+from pynxtools.definitions.dev_tools.globals.nxdl import get_nxdl_version
+
+MAIN_BRANCH_NAME = "fairmat"
+
+
+def _build_version(tag: str, distance: int, node: str, dirty: bool) -> str:
+ """
+ Builds the version string for a given set of git states.
+ This resembles `no-guess-dev` + `node-and-date` behavior from setuptools_scm.
+ """
+ if distance == 0 and not dirty:
+ return f"{tag}"
+
+ dirty_appendix = datetime.now().strftime(".d%Y%m%d") if dirty else ""
+ return f"{tag}.post1.dev{distance}+{node}{dirty_appendix}"
+
+
+def format_version(version: str) -> str:
+ """
+ Formats the git describe version string into the local format.
+ """
+ version_parts = version.split("-")
+
+ return _build_version(
+ version_parts[0],
+ int(version_parts[1]),
+ version_parts[2],
+ len(version_parts) == 4 and version_parts[3] == "dirty",
+ )
+
+
+def get_nexus_version() -> str:
+ """
+ The version of the Nexus standard and the NeXus Definition language
+ based on git tags and commits
+ """
+ version = get_vcs_version()
+
+ if version is not None:
+ return format_version(version)
+
+ version_file = os.path.join(os.path.dirname(__file__), "nexus-version.txt")
+
+ if not os.path.exists(version_file):
+ # We are in the limbo, just get the nxdl version from nexus definitions
+ return format_version(get_nxdl_version())
+
+ with open(version_file, encoding="utf-8") as vfile:
+ return format_version(vfile.read().strip())
+
+
+def get_nexus_version_hash() -> str:
+ """
+ Gets the git hash from the nexus version string
+ """
+ version = re.search(r"g([a-z0-9]+)", get_nexus_version())
+
+ if version is None:
+ return MAIN_BRANCH_NAME
+
+ return version.group(1)
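
The version-string logic added above can be exercised without installing pynxtools; the sketch below mirrors `_build_version` and `format_version` from this diff and feeds them sample `git describe` output (the tags and hashes are made up for illustration):

```python
from datetime import datetime

# Mirrors _build_version/format_version from the diff above, so the
# behavior can be checked standalone.
def _build_version(tag: str, distance: int, node: str, dirty: bool) -> str:
    if distance == 0 and not dirty:
        return f"{tag}"
    dirty_appendix = datetime.now().strftime(".d%Y%m%d") if dirty else ""
    return f"{tag}.post1.dev{distance}+{node}{dirty_appendix}"

def format_version(version: str) -> str:
    parts = version.split("-")
    return _build_version(
        parts[0], int(parts[1]), parts[2],
        len(parts) == 4 and parts[3] == "dirty",
    )

# A clean checkout exactly on a tag collapses to the bare tag:
print(format_version("v2022.07-0-g9636fee"))  # v2022.07
# Commits past the tag produce a local dev version:
print(format_version("v2022.07-3-g9636fee"))  # v2022.07.post1.dev3+g9636fee
```

A dirty working tree additionally appends a `.dYYYYMMDD` date suffix, which is why repeated builds from an unclean checkout get distinct version strings.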
diff --git a/pynxtools/_build_wrapper.py b/pynxtools/_build_wrapper.py
new file mode 100644
index 000000000..d7788860d
--- /dev/null
+++ b/pynxtools/_build_wrapper.py
@@ -0,0 +1,71 @@
+"""
+Build wrapper for setuptools to create a nexus-version.txt file
+containing the nexus definitions version.
+"""
+import os
+from subprocess import CalledProcessError, run
+from typing import Optional
+
+from setuptools import build_meta as _orig
+from setuptools.build_meta import * # pylint: disable=wildcard-import,unused-wildcard-import
+
+
+def get_vcs_version(tag_match="*[0-9]*") -> Optional[str]:
+ """
+ The version of the Nexus standard and the NeXus Definition language
+ based on git tags and commits
+ """
+ try:
+ return (
+ run(
+ [
+ "git",
+ "describe",
+ "--dirty",
+ "--tags",
+ "--long",
+ "--match",
+ tag_match,
+ ],
+ cwd=os.path.join(os.path.dirname(__file__), "../pynxtools/definitions"),
+ check=True,
+ capture_output=True,
+ )
+ .stdout.decode("utf-8")
+ .strip()
+ )
+ except (FileNotFoundError, CalledProcessError):
+ return None
+
+
+def _write_version_to_metadata():
+ version = get_vcs_version()
+ if version is None or not version:
+ return
+
+ with open(
+ os.path.join(os.path.dirname(__file__), "nexus-version.txt"),
+ "w+",
+ encoding="utf-8",
+ ) as file:
+ file.write(version)
+
+
+# pylint: disable=function-redefined
+def build_wheel(wheel_directory, config_settings=None, metadata_directory=None):
+ """
+ PEP 517 compliant build wheel hook.
+ This is a wrapper for setuptools and adds a nexus version file.
+ """
+ _write_version_to_metadata()
+ return _orig.build_wheel(wheel_directory, config_settings, metadata_directory)
+
+
+# pylint: disable=function-redefined
+def build_sdist(sdist_directory, config_settings=None):
+ """
+ PEP 517 compliant build sdist hook.
+ This is a wrapper for setuptools and adds a nexus version file.
+ """
+ _write_version_to_metadata()
+ return _orig.build_sdist(sdist_directory, config_settings)
diff --git a/pynxtools/dataconverter/README.md b/pynxtools/dataconverter/README.md
index 617c2de1f..f8d600f41 100644
--- a/pynxtools/dataconverter/README.md
+++ b/pynxtools/dataconverter/README.md
@@ -23,7 +23,7 @@ Usage: dataconverter [OPTIONS]
Options:
--input-file TEXT The path to the input data file to read.
(Repeat for more than one file.)
- --reader [apm|ellips|em_nion|em_spctrscpy|example|hall|json_map|json_yml|mpes|rii_database|transmission|xps]
+ --reader [apm|ellips|em_nion|em_om|em_spctrscpy|example|hall|json_map|json_yml|mpes|rii_database|sts|transmission|xps]
The reader to use. default="example"
--nxdl TEXT The name of the NXDL file to use without
extension.
@@ -35,9 +35,28 @@ Options:
checking the documentation.
--params-file FILENAME Allows to pass a .yaml file with all the
parameters the converter supports.
+ --undocumented Shows a log output for all undocumented
+ fields
+ --mapping TEXT Takes a .mapping.json file and
+ converts data from given input files.
--help Show this message and exit.
```
+#### Merge partial NeXus files into one
+
+```console
+user@box:~$ dataconverter --nxdl nxdl --input-file partial1.nxs --input-file partial2.nxs
+```
+
+#### Map an HDF5, JSON, or pickled Python dict file
+
+```console
+user@box:~$ dataconverter --nxdl nxdl --input-file any_data.hdf5 --mapping my_custom_map.mapping.json
+```
+
+#### You can find actual examples with data files at [`examples/json_map`](../../examples/json_map/).
+
+
#### Use with multiple input files
```console
diff --git a/pynxtools/dataconverter/convert.py b/pynxtools/dataconverter/convert.py
index f63e782e2..46c9af7eb 100644
--- a/pynxtools/dataconverter/convert.py
+++ b/pynxtools/dataconverter/convert.py
@@ -22,22 +22,26 @@
import logging
import os
import sys
-from shutil import copyfile
-from typing import List, Tuple
+from typing import List, Tuple, Optional
import xml.etree.ElementTree as ET
import click
import yaml
-
from pynxtools.dataconverter.readers.base.reader import BaseReader
from pynxtools.dataconverter import helpers
from pynxtools.dataconverter.writer import Writer
from pynxtools.dataconverter.template import Template
from pynxtools.nexus import nexus
+if sys.version_info >= (3, 10):
+ from importlib.metadata import entry_points
+else:
+ from importlib_metadata import entry_points
+
logger = logging.getLogger(__name__) # pylint: disable=C0103
+UNDOCUMENTED = 9
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(sys.stdout))
@@ -47,8 +51,18 @@ def get_reader(reader_name) -> BaseReader:
path_prefix = f"{os.path.dirname(__file__)}{os.sep}" if os.path.dirname(__file__) else ""
path = os.path.join(path_prefix, "readers", reader_name, "reader.py")
spec = importlib.util.spec_from_file_location("reader.py", path)
- module = importlib.util.module_from_spec(spec)
- spec.loader.exec_module(module) # type: ignore[attr-defined]
+ try:
+ module = importlib.util.module_from_spec(spec)
+ spec.loader.exec_module(module) # type: ignore[attr-defined]
+ except FileNotFoundError as exc:
+ # pylint: disable=unexpected-keyword-arg
+ importlib_module = entry_points(group='pynxtools.reader')
+ if (
+ importlib_module
+ and reader_name in map(lambda ep: ep.name, entry_points(group='pynxtools.reader'))
+ ):
+ return importlib_module[reader_name].load()
+ raise ValueError(f"The reader, {reader_name}, was not found.") from exc
return module.READER # type: ignore[attr-defined]
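
The fallback added to `get_reader` can be summarized as: if no built-in reader module exists for the name, look it up among the `pynxtools.reader` entry points and load it, otherwise raise. The sketch below is a simplified, self-contained analogue in which a plain dict stands in for the `importlib.metadata` entry-point collection; `resolve_reader` and `FakeReader` are hypothetical names:

```python
# Simplified sketch of the plugin-reader fallback: `plugin_eps` stands in
# for entry_points(group="pynxtools.reader"); calling the stored loader
# plays the role of EntryPoint.load().
def resolve_reader(reader_name, plugin_eps):
    """Return the reader class registered under reader_name, if any."""
    if reader_name in plugin_eps:
        return plugin_eps[reader_name]()
    raise ValueError(f"The reader, {reader_name}, was not found.")


class FakeReader:
    """Stand-in for a BaseReader subclass provided by a plugin."""


eps = {"myplugin": lambda: FakeReader}
reader_class = resolve_reader("myplugin", eps)
```

The real code keeps the same error message, so unknown reader names fail with a `ValueError` whether or not any plugins are installed.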
@@ -62,96 +76,150 @@ def get_names_of_all_readers() -> List[str]:
index_of_readers_folder_name = file.rindex(f"readers{os.sep}") + len(f"readers{os.sep}")
index_of_last_path_sep = file.rindex(os.sep)
all_readers.append(file[index_of_readers_folder_name:index_of_last_path_sep])
- return all_readers
-
-
-def append_template_data_to_acopy_of_one_inputfile(input: Tuple[str], output: str):
- """Helper function to build outputfile based on one inputfile plus template data."""
- # There are cases in which one of the inputfiles may contain already NeXus content
- # typically because the scientific software tool generates such a file
- # matching a specific application definition and thus additional pieces of information
- # inside the template (e.g. from an ELN) should just be added to that inputfile
-
- # one may or not in this case demand for a verification of that input file
- # before continuing, currently we ignore this verification
- for file_name in input:
- if file_name[0:file_name.rfind('.')] != output:
- continue
- else:
- print(f"Creating the output {output} based the this input {file_name}\n" \
- f"NeXus content in {file_name} is currently not verified !!!")
- copyfile(file_name, output)
-
- print(f"Template data will be added to the output {output}...\n" \
- f"Only these template data will be verified !!!")
- # when calling dataconverter with
- # --input-file processed.nxs.mtex
- # --output processed.nxs
- # -- io_mode="r+"
- # these calls can be executed repetitively as the first step is
- # the copying operation of *.nxs.mtex to *.nxs and then the access on the *.nxs
- # file using h5py is then read/write without regeneration
- # a repeated call has factually the same effect as the dataconverter
- # used to work i.e. using h5py with "w" would regenerate the *.nxs if already existent
- # this is a required to assure that repetitive calls of the ELN save function
- # in NOMAD do not end up with write conflicts on the *.nxs i.e. the output file
- # when the dataconverter is called
- return
-
-
-# pylint: disable=too-many-arguments
-def convert(input_file: Tuple[str],
- reader: str,
- nxdl: str,
- output: str,
- io_mode: str = "w",
- generate_template: bool = False,
- fair: bool = False,
- **kwargs):
- """The conversion routine that takes the input parameters and calls the necessary functions."""
+ plugins = list(map(lambda ep: ep.name, entry_points(group='pynxtools.reader')))
+ return all_readers + plugins
+
+
+def get_nxdl_root_and_path(nxdl: str):
+ """Get xml root element and file path from nxdl name e.g. NXapm.
+
+ Parameters
+ ----------
+ nxdl: str
+ Name of nxdl file e.g. NXapm from NXapm.nxdl.xml.
+
+ Returns
+ -------
+ ET.root
+ Root element of nxdl file.
+ str
+ Path of nxdl file.
+
+ Raises
+ ------
+ FileNotFoundError
+ Error if no file with the given nxdl name is found.
+ """
# Reading in the NXDL and generating a template
definitions_path = nexus.get_nexus_definitions_path()
if nxdl == "NXtest":
- nxdl_path = os.path.join(
+ nxdl_f_path = os.path.join(
f"{os.path.abspath(os.path.dirname(__file__))}/../../",
"tests", "data", "dataconverter", "NXtest.nxdl.xml")
elif nxdl == "NXroot":
- nxdl_path = os.path.join(definitions_path, "base_classes", "NXroot.nxdl.xml")
+ nxdl_f_path = os.path.join(definitions_path, "base_classes", "NXroot.nxdl.xml")
else:
- nxdl_path = os.path.join(definitions_path, "contributed_definitions", f"{nxdl}.nxdl.xml")
- if not os.path.exists(nxdl_path):
- nxdl_path = os.path.join(definitions_path, "applications", f"{nxdl}.nxdl.xml")
- if not os.path.exists(nxdl_path):
+ nxdl_f_path = os.path.join(definitions_path, "contributed_definitions", f"{nxdl}.nxdl.xml")
+ if not os.path.exists(nxdl_f_path):
+ nxdl_f_path = os.path.join(definitions_path, "applications", f"{nxdl}.nxdl.xml")
+ if not os.path.exists(nxdl_f_path):
+ nxdl_f_path = os.path.join(definitions_path, "base_classes", f"{nxdl}.nxdl.xml")
+ if not os.path.exists(nxdl_f_path):
raise FileNotFoundError(f"The nxdl file, {nxdl}, was not found.")
- nxdl_root = ET.parse(nxdl_path).getroot()
+ return ET.parse(nxdl_f_path).getroot(), nxdl_f_path
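The fallback cascade above (contributed_definitions, then applications, then base_classes) amounts to a first-match search over candidate directories. A stdlib-only sketch of that search order; the helper name `find_nxdl_file` is an assumption for illustration, not part of pynxtools:

```python
import os
import tempfile

def find_nxdl_file(definitions_path, nxdl):
    """Return the first existing <nxdl>.nxdl.xml in the search order used above."""
    # hypothetical helper mirroring the fallback cascade in get_nxdl_root_and_path
    for subdir in ("contributed_definitions", "applications", "base_classes"):
        candidate = os.path.join(definitions_path, subdir, f"{nxdl}.nxdl.xml")
        if os.path.exists(candidate):
            return candidate
    raise FileNotFoundError(f"The nxdl file, {nxdl}, was not found.")

# quick demonstration with a throwaway directory tree
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "applications"))
    os.makedirs(os.path.join(root, "contributed_definitions"))
    path = os.path.join(root, "applications", "NXapm.nxdl.xml")
    open(path, "w").close()
    assert find_nxdl_file(root, "NXapm") == path
```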
+
+
+def transfer_data_into_template(input_file,
+ reader, nxdl_name,
+ nxdl_root: Optional[ET.Element] = None,
+ **kwargs):
+ """Transfer parse and merged data from input experimental file, config file and eln.
+
+ Experimental and eln files will be parsed and finally will be merged into template.
+ Before returning the template validate the template data.
+
+ Parameters
+ ----------
+ input_file : Union[tuple[str], str]
+ Tuple of files or file
+ reader: str
+ Name of reader such as xps
+ nxdl_name : str
+ Root name of nxdl file, e.g. NXmpes from NXmpes.nxdl.xml
+ nxdl_root : ET.Element
+ Root element of the nxdl file; if not given, it is derived from nxdl_name
+
+ Returns
+ -------
+ Template
+ Template filled with data from raw file and eln file.
+
+ """
+ if nxdl_root is None:
+ nxdl_root, _ = get_nxdl_root_and_path(nxdl=nxdl_name)
template = Template()
helpers.generate_template_from_nxdl(nxdl_root, template)
- if generate_template:
- logger.info(template)
- return
- # Setting up all the input data
if isinstance(input_file, str):
input_file = (input_file,)
+
bulletpoint = "\n\u2022 "
logger.info("Using %s reader to convert the given files: %s ",
reader,
bulletpoint.join((" ", *input_file)))
data_reader = get_reader(reader)
- if not (nxdl in data_reader.supported_nxdls or "*" in data_reader.supported_nxdls):
+ if not (nxdl_name in data_reader.supported_nxdls or "*" in data_reader.supported_nxdls):
raise NotImplementedError("The chosen NXDL isn't supported by the selected reader.")
data = data_reader().read( # type: ignore[operator]
template=Template(template),
file_paths=input_file,
- **kwargs,
+ **kwargs
)
-
helpers.validate_data_dict(template, data, nxdl_root)
+ return data
+
+
+# pylint: disable=too-many-arguments,too-many-locals
+def convert(input_file: Tuple[str, ...],
+ reader: str,
+ nxdl: str,
+ output: str,
+ generate_template: bool = False,
+ fair: bool = False,
+ undocumented: bool = False,
+ **kwargs):
+ """The conversion routine that takes the input parameters and calls the necessary functions.
+
+ Parameters
+ ----------
+ input_file : Tuple[str]
+ Tuple of files or file
+ reader: str
+ Name of reader such as xps
+ nxdl : str
+ Root name of nxdl file, e.g. NXmpes for NXmpes.nxdl.xml
+ output : str
+ Output file name.
+ generate_template : bool, default False
+ If True, the generated template is printed to the logger and conversion stops.
+ fair : bool, default False
+ If True, conversion is aborted with a warning when the template
+ contains undocumented paths.
+ undocumented : bool, default False
+ If True, a log message is emitted for every undocumented path.
+
+ Returns
+ -------
+ None.
+ """
+
+ nxdl_root, nxdl_f_path = get_nxdl_root_and_path(nxdl)
+
+ if generate_template:
+ template = Template()
+ helpers.generate_template_from_nxdl(nxdl_root, template)
+ logger.info(template)
+ return
+ data = transfer_data_into_template(input_file=input_file, reader=reader,
+ nxdl_name=nxdl, nxdl_root=nxdl_root,
+ **kwargs)
+ if undocumented:
+ logger.setLevel(UNDOCUMENTED)
if fair and data.undocumented.keys():
logger.warning("There are undocumented paths in the template. This is not acceptable!")
return
@@ -159,13 +227,13 @@ def convert(input_file: Tuple[str],
for path in data.undocumented.keys():
if "/@default" in path:
continue
- logger.warning("The path, %s, is being written but has no documentation.", path)
-
- if io_mode == "r+":
- append_template_data_to_acopy_of_one_inputfile(
- input=input_file, output=output)
-
- Writer(data=data, nxdl_path=nxdl_path, output_path=output, io_mode=io_mode).write()
+ logger.log(
+ UNDOCUMENTED,
+ "The path, %s, is being written but has no documentation.",
+ path
+ )
+ helpers.add_default_root_attributes(data=data, filename=os.path.basename(output))
+ Writer(data=data, nxdl_f_path=nxdl_f_path, output_path=output).write()
logger.info("The output file generated: %s", output)
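The `undocumented` flag above switches the logger to a custom `UNDOCUMENTED` level (imported elsewhere in pynxtools). A minimal sketch of registering such a level with the standard `logging` module; the numeric value 15 is an assumption, chosen between DEBUG (10) and INFO (20):

```python
import logging

UNDOCUMENTED = 15  # assumed numeric value, between DEBUG (10) and INFO (20)
logging.addLevelName(UNDOCUMENTED, "UNDOCUMENTED")

logger = logging.getLogger("sketch")
logger.setLevel(UNDOCUMENTED)

# capture emitted messages for demonstration
records = []
handler = logging.Handler()
handler.emit = lambda record: records.append(record.getMessage())
logger.addHandler(handler)

logger.log(UNDOCUMENTED,
           "The path, %s, is being written but has no documentation.",
           "/ENTRY[entry]/foo")
assert records == ["The path, /ENTRY[entry]/foo, is being written but has no documentation."]
```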
@@ -187,7 +255,7 @@ def parse_params_file(params_file):
)
@click.option(
'--reader',
- default='example',
+ default='json_map',
type=click.Choice(get_names_of_all_readers(), case_sensitive=False),
help='The reader to use. default="example"'
)
@@ -202,11 +270,6 @@ def parse_params_file(params_file):
default='output.nxs',
help='The path to the output NeXus file to be generated.'
)
-@click.option(
- '--io_mode',
- default='w',
- help='I/O mode on the output NeXus file, see h5py doc for mode details, default="w".'
-)
@click.option(
'--generate-template',
is_flag=True,
@@ -218,21 +281,33 @@ def parse_params_file(params_file):
is_flag=True,
default=False,
help='Let the converter know to be stricter in checking the documentation.'
-) # pylint: disable=too-many-arguments
+)
@click.option(
'--params-file',
type=click.File('r'),
default=None,
help='Allows to pass a .yaml file with all the parameters the converter supports.'
)
-def convert_cli(input_file: Tuple[str],
+@click.option(
+ '--undocumented',
+ is_flag=True,
+ default=False,
+ help='Shows a log output for all undocumented fields'
+)
+@click.option(
+ '--mapping',
+ help='Takes a .mapping.json file and converts data from given input files.'
+)
+# pylint: disable=too-many-arguments
+def convert_cli(input_file: Tuple[str, ...],
reader: str,
nxdl: str,
output: str,
- io_mode: str,
generate_template: bool,
fair: bool,
- params_file: str):
+ params_file: str,
+ undocumented: bool,
+ mapping: str):
"""The CLI entrypoint for the convert function"""
if params_file:
try:
@@ -248,7 +323,11 @@ def convert_cli(input_file: Tuple[str],
sys.tracebacklimit = 0
raise IOError("\nError: Please supply an NXDL file with the option:"
" --nxdl ")
- convert(input_file, reader, nxdl, output, io_mode, generate_template, fair)
+ if mapping:
+ reader = "json_map"
+ input_file = input_file + tuple([mapping])
+ convert(input_file, reader, nxdl, output, generate_template, fair, undocumented)
if __name__ == '__main__':
diff --git a/pynxtools/dataconverter/hdfdict.py b/pynxtools/dataconverter/hdfdict.py
index 4edb68259..a4bbf87e6 100644
--- a/pynxtools/dataconverter/hdfdict.py
+++ b/pynxtools/dataconverter/hdfdict.py
@@ -123,7 +123,16 @@ def _recurse(hdfobject, datadict):
elif isinstance(value, h5py.Dataset):
if not lazy:
value = unpacker(value)
- datadict[key] = value
+ datadict[key] = (
+ value.asstr()[...]
+ if h5py.check_string_dtype(value.dtype)
+ else value
+ )
+
+ if "attrs" in dir(value):
+ datadict[key + "@"] = {}
+ for attr, attrval in value.attrs.items():
+ datadict[key + "@"][attr] = attrval
return datadict
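The new `key + "@"` convention above stores a dataset's HDF5 attributes alongside its value in the flattened dict. A stdlib-only sketch of the same flattening, with a stand-in object instead of an `h5py.Dataset` (the helper name `flatten_with_attrs` is an assumption):

```python
from types import SimpleNamespace

def flatten_with_attrs(hdf_like, datadict):
    """Mimic _recurse: copy values and park attributes under '<key>@'."""
    for key, value in hdf_like.items():
        datadict[key] = getattr(value, "value", value)
        if hasattr(value, "attrs"):
            datadict[key + "@"] = dict(value.attrs)
    return datadict

dataset = SimpleNamespace(value=[1.0, 2.0], attrs={"units": "nm"})
flat = flatten_with_attrs({"thickness": dataset}, {})
assert flat == {"thickness": [1.0, 2.0], "thickness@": {"units": "nm"}}
```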
diff --git a/pynxtools/dataconverter/helpers.py b/pynxtools/dataconverter/helpers.py
index 75a2bc2b9..57d526f4b 100644
--- a/pynxtools/dataconverter/helpers.py
+++ b/pynxtools/dataconverter/helpers.py
@@ -17,17 +17,24 @@
#
"""Helper functions commonly used by the convert routine."""
-from typing import List
+from typing import List, Optional, Any
from typing import Tuple, Callable, Union
import re
import xml.etree.ElementTree as ET
+from datetime import datetime, timezone
+import logging
+import json
import numpy as np
from ase.data import chemical_symbols
+import h5py
+from pynxtools import get_nexus_version, get_nexus_version_hash
from pynxtools.nexus import nexus
from pynxtools.nexus.nexus import NxdlAttributeError
+logger = logging.getLogger(__name__)
+
def is_a_lone_group(xml_element) -> bool:
"""Checks whether a given group XML element has no field or attributes mentioned"""
@@ -155,6 +162,20 @@ def generate_template_from_nxdl(root, template, path="", nxdl_root=None, nxdl_na
path_nxdl = convert_data_converter_dict_to_nxdl_path(path)
list_of_children_to_add = get_all_defined_required_children(path_nxdl, nxdl_name)
add_inherited_children(list_of_children_to_add, path, nxdl_root, template)
+ # Handling link: a link has a target attribute that stores the absolute path of the
+ # concept to be linked. The writer reads links from the template in the format
+ # {'link': '<file>:<concept-path>'}
+ elif tag == "link":
+ # NOTE: The code below can be implemented later once, NeXus brings optionality in
+ # link. Otherwise link will be considered optional by default.
+
+ # optionality = get_required_string(root)
+ # optional_parent = check_for_optional_parent(path, nxdl_root)
+ # optionality = "required" if optional_parent == "<>" else "optional"
+ # if optionality == "optional":
+ # template.optional_parents.append(optional_parent)
+ optionality = "optional"
+ template[optionality][path] = {'link': root.attrib['target']}
for child in root:
generate_template_from_nxdl(child, template, path, nxdl_root, nxdl_name)
@@ -333,7 +354,7 @@ def path_in_data_dict(nxdl_path: str, data: dict) -> Tuple[bool, str]:
for key in data.keys():
if nxdl_path == convert_data_converter_dict_to_nxdl_path(key):
return True, key
- return False, ""
+ return False, None
def check_for_optional_parent(path: str, nxdl_root: ET.Element) -> str:
@@ -366,6 +387,8 @@ def all_required_children_are_set(optional_parent_path, data, nxdl_root):
"""Walks over optional parent's children and makes sure all required ones are set"""
optional_parent_path = convert_data_converter_dict_to_nxdl_path(optional_parent_path)
for key in data:
+ if key in data["lone_groups"]:
+ continue
nxdl_key = convert_data_converter_dict_to_nxdl_path(key)
if nxdl_key[0:nxdl_key.rfind("/")] == optional_parent_path \
and is_node_required(nxdl_key, nxdl_root) \
@@ -424,7 +447,7 @@ def does_group_exist(path_to_group, data):
return False
-def ensure_all_required_fields_exist(template, data):
+def ensure_all_required_fields_exist(template, data, nxdl_root):
"""Checks whether all the required fields are in the returned data object."""
for path in template["required"]:
entry_name = get_name_from_data_dict_entry(path[path.rindex('/') + 1:])
@@ -432,9 +455,18 @@ def ensure_all_required_fields_exist(template, data):
continue
nxdl_path = convert_data_converter_dict_to_nxdl_path(path)
is_path_in_data_dict, renamed_path = path_in_data_dict(nxdl_path, data)
- if path in template["lone_groups"] and does_group_exist(path, data):
- continue
+ renamed_path = path if renamed_path is None else renamed_path
+ if path in template["lone_groups"]:
+ opt_parent = check_for_optional_parent(path, nxdl_root)
+ if opt_parent != "<>":
+ if does_group_exist(opt_parent, data) and not does_group_exist(renamed_path, data):
+ raise ValueError(f"The required group, {path}, hasn't been supplied"
+ f" while its optional parent, {path}, is supplied.")
+ continue
+ if not does_group_exist(renamed_path, data):
+ raise ValueError(f"The required group, {path}, hasn't been supplied.")
+ continue
if not is_path_in_data_dict or data[renamed_path] is None:
raise ValueError(f"The data entry corresponding to {path} is required "
f"and hasn't been supplied by the reader.")
@@ -475,11 +507,10 @@ def validate_data_dict(template, data, nxdl_root: ET.Element):
nxdl_path_to_elm: dict = {}
# Make sure all required fields exist.
- ensure_all_required_fields_exist(template, data)
+ ensure_all_required_fields_exist(template, data, nxdl_root)
try_undocumented(data, nxdl_root)
for path in data.get_documented().keys():
- # print(f"{path}")
if data[path] is not None:
entry_name = get_name_from_data_dict_entry(path[path.rindex('/') + 1:])
nxdl_path = convert_data_converter_dict_to_nxdl_path(path)
@@ -559,12 +590,38 @@ def convert_to_hill(atoms_typ):
return atom_list + list(atoms_typ)
+def add_default_root_attributes(data, filename):
+ """
+ Takes a dict/Template and adds NXroot fields/attributes that are inherently available
+ """
+ def update_and_warn(key: str, value: str):
+ if key in data and data[key] != value:
+ logger.warning(
+ "The NXroot entry '%s' (value: %s) should not be populated by the reader. "
+ "This is overwritten by the actually used value '%s'",
+ key, data[key], value
+ )
+ data[key] = value
+
+ update_and_warn("/@NX_class", "NXroot")
+ update_and_warn("/@file_name", filename)
+ update_and_warn("/@file_time", str(datetime.now(timezone.utc).astimezone()))
+ update_and_warn("/@file_update_time", data["/@file_time"])
+ update_and_warn(
+ "/@NeXus_repository",
+ "https://github.com/FAIRmat-NFDI/nexus_definitions/"
+ f"blob/{get_nexus_version_hash()}"
+ )
+ update_and_warn("/@NeXus_version", get_nexus_version())
+ update_and_warn("/@HDF5_version", '.'.join(map(str, h5py.h5.get_libversion())))
+ update_and_warn("/@h5py_version", h5py.__version__)
+
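`add_default_root_attributes` above deliberately overwrites reader-supplied NXroot entries with the authoritative values and only warns about the conflict. The closure pattern can be exercised on a plain dict; `apply_root_defaults` is a hypothetical name for this sketch:

```python
import logging

def apply_root_defaults(data, defaults):
    """Overwrite root-level entries with authoritative values, warning on conflicts."""
    log = logging.getLogger("root-defaults")
    for key, value in defaults.items():
        if key in data and data[key] != value:
            log.warning("'%s' (value: %s) is overwritten by '%s'",
                        key, data[key], value)
        data[key] = value
    return data

data = {"/@NX_class": "NXentry"}  # wrong value supplied by a reader
apply_root_defaults(data, {"/@NX_class": "NXroot", "/@file_name": "apm.case0.nxs"})
assert data["/@NX_class"] == "NXroot"
assert data["/@file_name"] == "apm.case0.nxs"
```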
+
def extract_atom_types(formula, mode='hill'):
"""Extract atom types form chemical formula."""
-
atom_types: set = set()
element: str = ""
- # tested with "(C38H54S4)n(NaO2)5(CH4)NH3B"
+
for char in formula:
if char.isalpha():
if char.isupper() and element == "":
@@ -594,3 +651,77 @@ def extract_atom_types(formula, mode='hill'):
return convert_to_hill(atom_types)
return atom_types
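In Hill notation, carbon comes first, then hydrogen, then the remaining elements alphabetically; formulas without carbon are ordered purely alphabetically. A stdlib sketch of that ordering step only (element extraction itself is what `extract_atom_types` does above; `to_hill` here is an illustrative name, not the pynxtools `convert_to_hill`):

```python
def to_hill(atom_types):
    """Order a set of element symbols in Hill notation."""
    atoms = sorted(atom_types)
    if "C" in atoms:
        # carbon present: C first, then H if present, then the rest alphabetically
        ordered = ["C"] + (["H"] if "H" in atoms else [])
        return ordered + [sym for sym in atoms if sym not in ("C", "H")]
    # no carbon: plain alphabetical order
    return atoms

assert to_hill({"H", "O", "C"}) == ["C", "H", "O"]
assert to_hill({"Na", "Cl"}) == ["Cl", "Na"]
```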
+
+
+# pylint: disable=too-many-branches
+def transform_to_intended_dt(str_value: Any) -> Optional[Any]:
+ """Transform string to the intended data type, if not then return str_value.
+
+ E.g. '2.5E-2' will be transformed into 2.5E-2.
+ tested with: '2.4E-23', '28', '45.98', 'test', ['59', '3.00005', '498E-34'],
+ '23 34 444 5000', None
+ with result: 2.4e-23, 28, 45.98, test, [5.90000e+01 3.00005e+00 4.98000e-32],
+ np.array([23 34 444 5000]), None
+ NOTE: a future argument could hint at the intended container type,
+ e.g. numpy array or list.
+ Parameters
+ ----------
+ str_value : str
+ Data from other format that comes as string e.g. string of list.
+
+ Returns
+ -------
+ Union[str, int, float, np.ndarray]
+ Converted data type
+ """
+
+ symbol_list_for_data_seperation = [';', ' ']
+ transformed: Any = None
+
+ if isinstance(str_value, list):
+ try:
+ transformed = np.array(str_value, dtype=np.float64)
+ return transformed
+ except ValueError:
+ pass
+
+ elif isinstance(str_value, np.ndarray):
+ return str_value
+ elif isinstance(str_value, str):
+ try:
+ transformed = int(str_value)
+ except ValueError:
+ try:
+ transformed = float(str_value)
+ except ValueError:
+ if '[' in str_value and ']' in str_value:
+ transformed = json.loads(str_value)
+ if transformed is not None:
+ return transformed
+ for sym in symbol_list_for_data_seperation:
+ if sym in str_value:
+ parts = str_value.split(sym)
+ modified_parts: List = []
+ for part in parts:
+ part = transform_to_intended_dt(part)
+ if isinstance(part, (int, float)):
+ modified_parts.append(part)
+ else:
+ return str_value
+ return transform_to_intended_dt(modified_parts)
+
+ return str_value
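The conversion cascade above (int, then float, then a JSON list, then delimiter splitting) can be condensed into a compact try-each-caster loop. A stdlib-only sketch of the core idea, not the pynxtools implementation itself (`coerce` is a hypothetical name, and the delimiter-splitting branch is omitted):

```python
import json

def coerce(value):
    """Try int, then float, then a JSON list, else return the string unchanged."""
    for caster in (int, float, json.loads):
        try:
            return caster(value)
        except (ValueError, json.JSONDecodeError):
            continue
    return value

assert coerce("28") == 28
assert coerce("2.4E-23") == 2.4e-23
assert coerce("[59, 3.00005]") == [59, 3.00005]
assert coerce("test") == "test"
```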
+
+
+def nested_dict_to_slash_separated_path(nested_dict: dict,
+ flattened_dict: dict,
+ parent_path=''):
+ """Convert nested dict into slash separeted path upto certain level."""
+ sep = '/'
+
+ for key, val in nested_dict.items():
+ path = parent_path + sep + key
+ if isinstance(val, dict):
+ nested_dict_to_slash_separated_path(val, flattened_dict, path)
+ else:
+ flattened_dict[path] = val
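A usage sketch of the slash-path flattening above, reimplemented with stdlib only so it runs standalone (`flatten` is an illustrative name mirroring `nested_dict_to_slash_separated_path`):

```python
def flatten(nested, flat=None, parent=""):
    """Convert a nested dict into slash-separated paths (mirrors the helper above)."""
    flat = {} if flat is None else flat
    for key, val in nested.items():
        path = parent + "/" + key
        if isinstance(val, dict):
            flatten(val, flat, path)
        else:
            flat[path] = val
    return flat

eln = {"atom_probe": {"pulser": {"pulse_mode": "laser"}}, "run_number": 2090}
assert flatten(eln) == {"/atom_probe/pulser/pulse_mode": "laser", "/run_number": 2090}
```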
diff --git a/pynxtools/dataconverter/readers/apm/map_concepts/apm_deployment_specifics_to_nx_map.py b/pynxtools/dataconverter/readers/apm/map_concepts/apm_deployment_specifics_to_nx_map.py
new file mode 100644
index 000000000..d4cdf84f6
--- /dev/null
+++ b/pynxtools/dataconverter/readers/apm/map_concepts/apm_deployment_specifics_to_nx_map.py
@@ -0,0 +1,52 @@
+#
+# Copyright The NOMAD Authors.
+#
+# This file is part of NOMAD. See https://nomad-lab.eu for further info.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+"""Dict mapping values for a specific deployed config of NOMAD OASIS + ELN + apm reader."""
+
+# pylint: disable=no-member,line-too-long
+
+# By design, the NOMAD-OASIS-specific examples show how different tools and services
+# can be coupled so that they work together. We assume that the ELN provides all those
+# pieces of information needed to instantiate a NeXus data artifact which
+# technology-partner-specific files or database blobs cannot deliver. Effectively, a
+# reader uses the generic ELN output (eln_data.yaml) to fill in these missing pieces of
+# information, while typically heavy data (tensors etc.) are translated and written
+# from the technology-partner files.
+# For large application definitions this can lead to a practical inconvenience: the
+# ELN exposed to the user becomes complex, with many fields to fill in just to assure
+# that all information is included in the ELN output and thus consumable by the
+# dataconverter.
+# From the perspective of a specific lab running a specific ELN version with or
+# alongside NOMAD OASIS, many pieces of information might never change, or
+# administrators may not wish to expose them via the end-user ELN, to reduce the
+# complexity for end users and make entering repetitive information obsolete.
+
+# This is the scenario in which deployment-specific mapping shines. Parsing of
+# deployment-specific details in the apm reader is currently implemented such that it
+# executes after the generic ELN data have been read, so entries already present
+# in the template get overwritten.
+
+from pynxtools.dataconverter.readers.apm.utils.apm_versioning \
+ import NX_APM_ADEF_NAME, NX_APM_ADEF_VERSION, NX_APM_EXEC_NAME, NX_APM_EXEC_VERSION
+
+
+NxApmDeploymentSpecificInput \
+ = {"/ENTRY[entry*]/@version": f"{NX_APM_ADEF_VERSION}",
+ "/ENTRY[entry*]/definition": f"{NX_APM_ADEF_NAME}",
+ "/ENTRY[entry*]/PROGRAM[program1]/program": f"{NX_APM_EXEC_NAME}",
+ "/ENTRY[entry*]/PROGRAM[program1]/program/@version": f"{NX_APM_EXEC_VERSION}",
+ "/ENTRY[entry*]/atom_probe/location": {"fun": "load_from", "terms": "location"}}
diff --git a/pynxtools/dataconverter/readers/apm/map_concepts/apm_eln_to_nx_map.py b/pynxtools/dataconverter/readers/apm/map_concepts/apm_eln_to_nx_map.py
new file mode 100644
index 000000000..76c763f47
--- /dev/null
+++ b/pynxtools/dataconverter/readers/apm/map_concepts/apm_eln_to_nx_map.py
@@ -0,0 +1,109 @@
+#
+# Copyright The NOMAD Authors.
+#
+# This file is part of NOMAD. See https://nomad-lab.eu for further info.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+"""Dict mapping custom schema instances from eln_data.yaml file on concepts in NXapm."""
+
+NxApmElnInput = {"IGNORE": {"fun": "load_from_dict_list", "terms": "em_lab/detector"},
+ "IGNORE": {"fun": "load_from", "terms": "em_lab/ebeam_column/aberration_correction/applied"},
+ "IGNORE": {"fun": "load_from_dict_list", "terms": "em_lab/ebeam_column/aperture_em"},
+ "/ENTRY[entry*]/PROGRAM[program2]/program": {"fun": "load_from", "terms": "atom_probe/control_software_program"},
+ "/ENTRY[entry*]/PROGRAM[program2]/program/@version": {"fun": "load_from", "terms": "atom_probe/control_software_program__attr_version"},
+ "/ENTRY[entry*]/experiment_identifier": {"fun": "load_from", "terms": "entry/experiment_identifier"},
+ "/ENTRY[entry*]/start_time": {"fun": "load_from", "terms": "entry/start_time"},
+ "/ENTRY[entry*]/end_time": {"fun": "load_from", "terms": "entry/end_time"},
+ "/ENTRY[entry*]/run_number": {"fun": "load_from", "terms": "entry/run_number"},
+ "/ENTRY[entry*]/operation_mode": {"fun": "load_from", "terms": "entry/operation_mode"},
+ "/ENTRY[entry*]/experiment_description": {"fun": "load_from", "terms": "entry/experiment_description"},
+ "IGNORE": {"fun": "load_from", "terms": "sample/alias"},
+ "/ENTRY[entry*]/sample/grain_diameter": {"fun": "load_from", "terms": "sample/grain_diameter/value"},
+ "/ENTRY[entry*]/sample/grain_diameter/@units": {"fun": "load_from", "terms": "sample/grain_diameter/unit"},
+ "/ENTRY[entry*]/sample/grain_diameter_error": {"fun": "load_from", "terms": "sample/grain_diameter_error/value"},
+ "/ENTRY[entry*]/sample/grain_diameter_error/@units": {"fun": "load_from", "terms": "sample/grain_diameter_error/unit"},
+ "/ENTRY[entry*]/sample/heat_treatment_quenching_rate": {"fun": "load_from", "terms": "sample/heat_treatment_quenching_rate/value"},
+ "/ENTRY[entry*]/sample/heat_treatment_quenching_rate/@units": {"fun": "load_from", "terms": "sample/heat_treatment_quenching_rate/unit"},
+ "/ENTRY[entry*]/sample/heat_treatment_quenching_rate_error": {"fun": "load_from", "terms": "sample/heat_treatment_quenching_rate_error/value"},
+ "/ENTRY[entry*]/sample/heat_treatment_quenching_rate_error/@units": {"fun": "load_from", "terms": "sample/heat_treatment_quenching_rate_error/unit"},
+ "/ENTRY[entry*]/sample/heat_treatment_temperature": {"fun": "load_from", "terms": "sample/heat_treatment_temperature/value"},
+ "/ENTRY[entry*]/sample/heat_treatment_temperature/@units": {"fun": "load_from", "terms": "sample/heat_treatment_temperature/unit"},
+ "/ENTRY[entry*]/sample/heat_treatment_temperature_error": {"fun": "load_from", "terms": "sample/heat_treatment_temperature_error/value"},
+ "/ENTRY[entry*]/sample/heat_treatment_temperature_error/@units": {"fun": "load_from", "terms": "sample/heat_treatment_temperature_error/unit"},
+ "/ENTRY[entry*]/specimen/name": {"fun": "load_from", "terms": "specimen/name"},
+ "/ENTRY[entry*]/specimen/preparation_date": {"fun": "load_from", "terms": "specimen/preparation_date"},
+ "IGNORE": {"fun": "load_from", "terms": "specimen/sample_history"},
+ "/ENTRY[entry*]/specimen/alias": {"fun": "load_from", "terms": "specimen/alias"},
+ "/ENTRY[entry*]/specimen/is_polycrystalline": {"fun": "load_from", "terms": "specimen/is_polycrystalline"},
+ "/ENTRY[entry*]/specimen/description": {"fun": "load_from", "terms": "specimen/description"},
+ "/ENTRY[entry*]/atom_probe/FABRICATION[fabrication]/identifier": {"fun": "load_from", "terms": "atom_probe/fabrication_identifier"},
+ "/ENTRY[entry*]/atom_probe/FABRICATION[fabrication]/model": {"fun": "load_from", "terms": "atom_probe/fabrication_model"},
+ "/ENTRY[entry*]/atom_probe/FABRICATION[fabrication]/vendor": {"fun": "load_from", "terms": "atom_probe/fabrication_vendor"},
+ "/ENTRY[entry*]/atom_probe/analysis_chamber/pressure": {"fun": "load_from", "terms": "atom_probe/analysis_chamber_pressure/value"},
+ "/ENTRY[entry*]/atom_probe/analysis_chamber/pressure/@units": {"fun": "load_from", "terms": "atom_probe/analysis_chamber_pressure/unit"},
+ "/ENTRY[entry*]/atom_probe/control_software/PROGRAM[program1]/program": {"fun": "load_from", "terms": "atom_probe/control_software_program"},
+ "/ENTRY[entry*]/atom_probe/control_software/PROGRAM[program1]/program/@version": {"fun": "load_from", "terms": "atom_probe/control_software_program__attr_version"},
+ "/ENTRY[entry*]/atom_probe/field_of_view": {"fun": "load_from", "terms": "atom_probe/field_of_view/value"},
+ "/ENTRY[entry*]/atom_probe/field_of_view/@units": {"fun": "load_from", "terms": "atom_probe/field_of_view/unit"},
+ "/ENTRY[entry*]/atom_probe/flight_path_length": {"fun": "load_from", "terms": "atom_probe/flight_path_length/value"},
+ "/ENTRY[entry*]/atom_probe/flight_path_length/@units": {"fun": "load_from", "terms": "atom_probe/flight_path_length/unit"},
+ "/ENTRY[entry*]/atom_probe/instrument_name": {"fun": "load_from", "terms": "atom_probe/instrument_name"},
+ "/ENTRY[entry*]/atom_probe/ion_detector/model": {"fun": "load_from", "terms": "atom_probe/ion_detector_model"},
+ "/ENTRY[entry*]/atom_probe/ion_detector/name": {"fun": "load_from", "terms": "atom_probe/ion_detector_name"},
+ "/ENTRY[entry*]/atom_probe/ion_detector/serial_number": {"fun": "load_from", "terms": "atom_probe/ion_detector_serial_number"},
+ "/ENTRY[entry*]/atom_probe/ion_detector/type": {"fun": "load_from", "terms": "atom_probe/ion_detector_type"},
+ "/ENTRY[entry*]/atom_probe/local_electrode/name": {"fun": "load_from", "terms": "atom_probe/local_electrode_name"},
+ "/ENTRY[entry*]/atom_probe/location": {"fun": "load_from", "terms": "atom_probe/location"},
+ "/ENTRY[entry*]/atom_probe/REFLECTRON[reflectron]/applied": {"fun": "load_from", "terms": "atom_probe/reflectron_applied"},
+ "/ENTRY[entry*]/atom_probe/stage_lab/base_temperature": {"fun": "load_from", "terms": "atom_probe/stage_lab_base_temperature/value"},
+ "/ENTRY[entry*]/atom_probe/stage_lab/base_temperature/@units": {"fun": "load_from", "terms": "atom_probe/stage_lab_base_temperature/unit"},
+ "/ENTRY[entry*]/atom_probe/specimen_monitoring/detection_rate": {"fun": "load_from", "terms": "atom_probe/specimen_monitoring_detection_rate/value"},
+ "/ENTRY[entry*]/atom_probe/specimen_monitoring/detection_rate/@units": {"fun": "load_from", "terms": "atom_probe/specimen_monitoring_detection_rate/unit"},
+ "/ENTRY[entry*]/atom_probe/specimen_monitoring/initial_radius": {"fun": "load_from", "terms": "atom_probe/specimen_monitoring_initial_radius/value"},
+ "/ENTRY[entry*]/atom_probe/specimen_monitoring/initial_radius/@units": {"fun": "load_from", "terms": "atom_probe/specimen_monitoring_initial_radius/unit"},
+ "/ENTRY[entry*]/atom_probe/specimen_monitoring/shank_angle": {"fun": "load_from", "terms": "atom_probe/specimen_monitoring_shank_angle/value"},
+ "/ENTRY[entry*]/atom_probe/specimen_monitoring/shank_angle/@units": {"fun": "load_from", "terms": "atom_probe/specimen_monitoring_shank_angle/unit"},
+ "/ENTRY[entry*]/atom_probe/status": {"fun": "load_from", "terms": "atom_probe/status"},
+ "/ENTRY[entry*]/atom_probe/pulser/pulse_fraction": {"fun": "load_from", "terms": "atom_probe/pulser/pulse_fraction"},
+ "/ENTRY[entry*]/atom_probe/pulser/pulse_frequency": {"fun": "load_from", "terms": "atom_probe/pulser/pulse_frequency/value"},
+ "/ENTRY[entry*]/atom_probe/pulser/pulse_frequency/@units": {"fun": "load_from", "terms": "atom_probe/pulser/pulse_frequency/unit"},
+ "/ENTRY[entry*]/atom_probe/pulser/pulse_mode": {"fun": "load_from", "terms": "atom_probe/pulser/pulse_mode"},
+ "/ENTRY[entry*]/atom_probe/ranging/PROGRAM[program1]/program": {"fun": "load_from", "terms": "atom_probe/ranging/program"},
+ "/ENTRY[entry*]/atom_probe/ranging/PROGRAM[program1]/program/@version": {"fun": "load_from", "terms": "atom_probe/ranging/program__attr_version"},
+ "/ENTRY[entry*]/atom_probe/reconstruction/PROGRAM[program1]/program": {"fun": "load_from", "terms": "atom_probe/reconstruction/program"},
+ "/ENTRY[entry*]/atom_probe/reconstruction/PROGRAM[program1]/program/@version": {"fun": "load_from", "terms": "atom_probe/reconstruction/program__attr_version"},
+ "/ENTRY[entry*]/atom_probe/reconstruction/crystallographic_calibration": {"fun": "load_from", "terms": "atom_probe/reconstruction/crystallographic_calibration"},
+ "/ENTRY[entry*]/atom_probe/reconstruction/parameter": {"fun": "load_from", "terms": "atom_probe/reconstruction/parameter"},
+ "/ENTRY[entry*]/atom_probe/reconstruction/protocol_name": {"fun": "load_from", "terms": "atom_probe/reconstruction/protocol_name"}}
+
+# NeXus concept specific mapping tables which require special treatment as the current
+# NOMAD OASIS custom schema implementation delivers them as a list of dictionaries instead
+# of a directly flattenable list of keyword, value pairs
+
+NxUserFromListOfDict = {"/ENTRY[entry*]/USER[user*]/name": {"fun": "load_from", "terms": "name"},
+ "/ENTRY[entry*]/USER[user*]/affiliation": {"fun": "load_from", "terms": "affiliation"},
+ "/ENTRY[entry*]/USER[user*]/address": {"fun": "load_from", "terms": "address"},
+ "/ENTRY[entry*]/USER[user*]/email": {"fun": "load_from", "terms": "email"},
+ "/ENTRY[entry*]/USER[user*]/orcid": {"fun": "load_from", "terms": "orcid"},
+ "/ENTRY[entry*]/USER[user*]/orcid_platform": {"fun": "load_from", "terms": "orcid_platform"},
+ "/ENTRY[entry*]/USER[user*]/telephone_number": {"fun": "load_from", "terms": "telephone_number"},
+ "/ENTRY[entry*]/USER[user*]/role": {"fun": "load_from", "terms": "role"},
+ "/ENTRY[entry*]/USER[user*]/social_media_name": {"fun": "load_from", "terms": "social_media_name"},
+ "/ENTRY[entry*]/USER[user*]/social_media_platform": {"fun": "load_from", "terms": "social_media_platform"}}
+
+# LEAP6000 can use up to two lasers and voltage pulsing (both at the same time?)
+NxPulserFromListOfDict = {"/ENTRY[entry*]/atom_probe/pulser/SOURCE[source*]/name": {"fun": "load_from", "terms": "name"},
+ "/ENTRY[entry*]/atom_probe/pulser/SOURCE[source*]/power": {"fun": "load_from", "terms": "power"},
+ "/ENTRY[entry*]/atom_probe/pulser/SOURCE[source*]/pulse_energy": {"fun": "load_from", "terms": "pulse_energy"},
+ "/ENTRY[entry*]/atom_probe/pulser/SOURCE[source*]/wavelength": {"fun": "load_from", "terms": "wavelength"}}
diff --git a/pynxtools/dataconverter/readers/apm/reader.py b/pynxtools/dataconverter/readers/apm/reader.py
index 651100fd1..2e946257f 100644
--- a/pynxtools/dataconverter/readers/apm/reader.py
+++ b/pynxtools/dataconverter/readers/apm/reader.py
@@ -23,22 +23,25 @@
from pynxtools.dataconverter.readers.base.reader import BaseReader
-from pynxtools.dataconverter.readers.apm.utils.apm_use_case_selector \
+from pynxtools.dataconverter.readers.apm.utils.apm_define_io_cases \
import ApmUseCaseSelector
-from pynxtools.dataconverter.readers.apm.utils.apm_generic_eln_io \
+from pynxtools.dataconverter.readers.apm.utils.apm_load_deployment_specifics \
+ import NxApmNomadOasisConfigurationParser
+
+from pynxtools.dataconverter.readers.apm.utils.apm_load_generic_eln \
import NxApmNomadOasisElnSchemaParser
-from pynxtools.dataconverter.readers.apm.utils.apm_reconstruction_io \
+from pynxtools.dataconverter.readers.apm.utils.apm_load_reconstruction \
import ApmReconstructionParser
-from pynxtools.dataconverter.readers.apm.utils.apm_ranging_io \
+from pynxtools.dataconverter.readers.apm.utils.apm_load_ranging \
import ApmRangingDefinitionsParser
-from pynxtools.dataconverter.readers.apm.utils.apm_nexus_plots \
+from pynxtools.dataconverter.readers.apm.utils.apm_create_nx_default_plots \
import apm_default_plot_generator
-from pynxtools.dataconverter.readers.apm.utils.apm_example_data \
+from pynxtools.dataconverter.readers.apm.utils.apm_generate_synthetic_data \
import ApmCreateExampleData
# this apm parser combines multiple sub-parsers
@@ -103,6 +106,12 @@ def read(self,
print("No input file defined for eln data !")
return {}
+ print("Parse (meta)data coming from a configuration specific to this OASIS...")
+ if len(case.cfg) == 1:
+ nx_apm_cfg = NxApmNomadOasisConfigurationParser(case.cfg[0], entry_id)
+ nx_apm_cfg.report(template)
+ # having and/or using a deployment-specific configuration is optional
+
print("Parse (numerical) data and metadata from ranging definitions file...")
if len(case.reconstruction) == 1:
nx_apm_recon = ApmReconstructionParser(case.reconstruction[0], entry_id)
@@ -120,13 +129,10 @@ def read(self,
print("Create NeXus default plottable data...")
apm_default_plot_generator(template, n_entries)
- debugging = False
- if debugging is True:
- print("Reporting state of template before passing to HDF5 writing...")
- for keyword in template.keys():
- print(keyword)
- # print(type(template[keyword]))
- # print(template[keyword])
+ # print("Reporting state of template before passing to HDF5 writing...")
+ # for keyword in template.keys():
+ # print(keyword)
+ # print(template[keyword])
print("Forward instantiated template to the NXS writer...")
return template
diff --git a/pynxtools/dataconverter/readers/apm/utils/apm_nexus_plots.py b/pynxtools/dataconverter/readers/apm/utils/apm_create_nx_default_plots.py
similarity index 100%
rename from pynxtools/dataconverter/readers/apm/utils/apm_nexus_plots.py
rename to pynxtools/dataconverter/readers/apm/utils/apm_create_nx_default_plots.py
diff --git a/pynxtools/dataconverter/readers/apm/utils/apm_use_case_selector.py b/pynxtools/dataconverter/readers/apm/utils/apm_define_io_cases.py
similarity index 65%
rename from pynxtools/dataconverter/readers/apm/utils/apm_use_case_selector.py
rename to pynxtools/dataconverter/readers/apm/utils/apm_define_io_cases.py
index 2819281ba..26a73a1e9 100644
--- a/pynxtools/dataconverter/readers/apm/utils/apm_use_case_selector.py
+++ b/pynxtools/dataconverter/readers/apm/utils/apm_define_io_cases.py
@@ -36,11 +36,21 @@ def __init__(self, file_paths: Tuple[str] = None):
eln injects additional metadata and eventually numerical data.
"""
self.case: Dict[str, list] = {}
+ self.eln: List[str] = []
+ self.cfg: List[str] = []
+ self.reconstruction: List[str] = []
+ self.ranging: List[str] = []
self.is_valid = False
self.supported_mime_types = [
"pos", "epos", "apt", "rrng", "rng", "txt", "yaml", "yml"]
for mime_type in self.supported_mime_types:
self.case[mime_type] = []
+
+ self.sort_files_by_mime_type(file_paths)
+ self.check_validity_of_file_combinations()
+
+ def sort_files_by_mime_type(self, file_paths: Tuple[str] = None):
+ """Sort all input files based on their mime type to prepare the validity check."""
for file_name in file_paths:
index = file_name.lower().rfind(".")
if index >= 0:
@@ -48,15 +58,23 @@ def __init__(self, file_paths: Tuple[str] = None):
if suffix in self.supported_mime_types:
if file_name not in self.case[suffix]:
self.case[suffix].append(file_name)
- recon_input = 0
- range_input = 0
+
+ def check_validity_of_file_combinations(self):
+ """Check if this combination of types of files is supported."""
+ recon_input = 0 # reconstruction relevant file e.g. POS, ePOS, APT
+ range_input = 0 # ranging definition file, e.g. RNG, RRNG
+ other_input = 0 # generic ELN or OASIS-specific configurations
for mime_type, value in self.case.items():
if mime_type in ["pos", "epos", "apt"]:
recon_input += len(value)
- if mime_type in ["rrng", "rng", "txt"]:
+ elif mime_type in ["rrng", "rng", "txt"]:
range_input += len(value)
- eln_input = len(self.case["yaml"]) + len(self.case["yml"])
- if (recon_input == 1) and (range_input == 1) and (eln_input == 1):
+ elif mime_type in ["yaml", "yml"]:
+ other_input += len(value)
+ else:
+ continue
+
+ if (recon_input == 1) and (range_input == 1) and (1 <= other_input <= 2):
self.is_valid = True
self.reconstruction: List[str] = []
self.ranging: List[str] = []
@@ -64,6 +82,12 @@ def __init__(self, file_paths: Tuple[str] = None):
self.reconstruction += self.case[mime_type]
for mime_type in ["rrng", "rng", "txt"]:
self.ranging += self.case[mime_type]
- self.eln: List[str] = []
+ yml: List[str] = []
for mime_type in ["yaml", "yml"]:
- self.eln += self.case[mime_type]
+ yml += self.case[mime_type]
+ for entry in yml:
+ if entry.endswith(".oasis.specific.yaml") \
+ or entry.endswith(".oasis.specific.yml"):
+ self.cfg += [entry]
+ else:
+ self.eln += [entry]
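The routing added at the end of this hunk is the key behavioral change: YAML files are no longer all treated as ELN dumps; a `.oasis.specific.yaml`/`.oasis.specific.yml` suffix now diverts a file into the deployment-configuration slot. A minimal standalone sketch of that routing (the function name is hypothetical):

```python
# Sketch of the new YAML routing: OASIS deployment configurations are
# recognized by their double suffix, everything else YAML-ish is ELN data.
def route_yaml_files(file_paths):
    cfg, eln = [], []
    for file_name in file_paths:
        if file_name.endswith((".oasis.specific.yaml", ".oasis.specific.yml")):
            cfg.append(file_name)
        elif file_name.endswith((".yaml", ".yml")):
            eln.append(file_name)
        # non-YAML files (pos, epos, apt, rng, rrng, ...) are handled elsewhere
    return cfg, eln


cfg, eln = route_yaml_files(
    ["eln_data.yaml", "my_lab.oasis.specific.yaml", "recon.pos"])
```

Together with the relaxed validity condition `1 <= other_input <= 2`, this allows an upload to carry either just the ELN dump or the ELN dump plus one OASIS-specific configuration.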
diff --git a/pynxtools/dataconverter/readers/apm/utils/apm_example_data.py b/pynxtools/dataconverter/readers/apm/utils/apm_generate_synthetic_data.py
similarity index 99%
rename from pynxtools/dataconverter/readers/apm/utils/apm_example_data.py
rename to pynxtools/dataconverter/readers/apm/utils/apm_generate_synthetic_data.py
index 47c63f8f3..c34d30f7b 100644
--- a/pynxtools/dataconverter/readers/apm/utils/apm_example_data.py
+++ b/pynxtools/dataconverter/readers/apm/utils/apm_generate_synthetic_data.py
@@ -45,7 +45,7 @@
from pynxtools.dataconverter.readers.apm.utils.apm_versioning \
import NX_APM_ADEF_NAME, NX_APM_ADEF_VERSION, NX_APM_EXEC_NAME, NX_APM_EXEC_VERSION
-from pynxtools.dataconverter.readers.apm.utils.apm_ranging_io \
+from pynxtools.dataconverter.readers.apm.utils.apm_load_ranging \
import add_unknown_iontype
diff --git a/pynxtools/dataconverter/readers/apm/utils/apm_generic_eln_io.py b/pynxtools/dataconverter/readers/apm/utils/apm_generic_eln_io.py
deleted file mode 100644
index 41677a1eb..000000000
--- a/pynxtools/dataconverter/readers/apm/utils/apm_generic_eln_io.py
+++ /dev/null
@@ -1,409 +0,0 @@
-#
-# Copyright The NOMAD Authors.
-#
-# This file is part of NOMAD. See https://nomad-lab.eu for further info.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-"""Wrapping multiple parsers for vendor files with NOMAD OASIS/ELN/YAML metadata."""
-
-# pylint: disable=no-member
-
-import flatdict as fd
-
-import numpy as np
-
-import yaml
-
-from ase.data import chemical_symbols
-
-from pynxtools.dataconverter.readers.apm.utils.apm_versioning \
- import NX_APM_ADEF_NAME, NX_APM_ADEF_VERSION, NX_APM_EXEC_NAME, NX_APM_EXEC_VERSION
-
-
-class NxApmNomadOasisElnSchemaParser: # pylint: disable=too-few-public-methods
- """Parse eln_data.yaml dump file content generated from a NOMAD OASIS YAML.
-
- This parser implements a design where an instance of a specific NOMAD
- custom schema ELN template is used to fill pieces of information which
- are typically not contained in files from technology partners
- (e.g. pos, epos, apt, rng, rrng, ...). Until now, this custom schema and
- the NXapm application definition do not use a fully harmonized vocabulary.
- Therefore, the here hardcoded implementation is needed which maps specifically
- named pieces of information from the custom schema instance on named fields
- in an instance of NXapm
-
- The functionalities in this ELN YAML parser do not check if the
- instantiated template yields an instance which is compliant NXapm.
- Instead, this task is handled by the generic part of the dataconverter
- during the verification of the template dictionary.
- """
-
- def __init__(self, file_name: str, entry_id: int):
- print(f"Extracting data from ELN file: {file_name}")
- if (file_name.rsplit('/', 1)[-1].startswith("eln_data")
- or file_name.startswith("eln_data")) and entry_id > 0:
- self.entry_id = entry_id
- self.file_name = file_name
- with open(self.file_name, "r", encoding="utf-8") as stream:
- self.yml = fd.FlatDict(yaml.safe_load(stream), delimiter=":")
- else:
- self.entry_id = 1
- self.file_name = ""
- self.yml = {}
-
- def parse_entry(self, template: dict) -> dict:
- """Copy data in entry section."""
- # print("Parsing entry...")
- trg = f"/ENTRY[entry{self.entry_id}]/"
- src = "entry"
- if isinstance(self.yml[src], fd.FlatDict):
- if (self.yml[f"{src}:attr_version"] == NX_APM_ADEF_VERSION) \
- and (self.yml[f"{src}:definition"] == NX_APM_ADEF_NAME):
- template[f"{trg}@version"] = NX_APM_ADEF_VERSION
- template[f"{trg}definition"] = NX_APM_ADEF_NAME
- template[f"{trg}PROGRAM[program1]/program"] = NX_APM_EXEC_NAME
- template[f"{trg}PROGRAM[program1]/program/@version"] = NX_APM_EXEC_VERSION
- if ("program" in self.yml[src].keys()) \
- and ("program__attr_version" in self.yml[src].keys()):
- template[f"{trg}PROGRAM[program2]/program"] \
- = self.yml[f"{src}:program"]
- template[f"{trg}PROGRAM[program2]/program/@version"] \
- = self.yml[f"{src}:program__attr_version"]
-
- required_field_names = ["experiment_identifier", "run_number",
- "operation_mode"]
- for field_name in required_field_names:
- if field_name in self.yml[src].keys():
- template[f"{trg}{field_name}"] = self.yml[f"{src}:{field_name}"]
-
- optional_field_names = ["start_time", "end_time",
- "experiment_description", "experiment_documentation"]
- for field_name in optional_field_names:
- if field_name in self.yml[src].keys():
- template[f"{trg}{field_name}"] = self.yml[f"{src}:{field_name}"]
-
- return template
-
- def parse_user(self, template: dict) -> dict:
- """Copy data in user section."""
- # print("Parsing user...")
- src = "user"
- if "user" in self.yml.keys():
- if len(self.yml[src]) >= 1:
- user_id = 1
- for user_list in self.yml[src]:
- trg = f"/ENTRY[entry{self.entry_id}]/USER[user{user_id}]/"
-
- required_field_names = ["name"]
- for field_name in required_field_names:
- if field_name in user_list.keys():
- template[f"{trg}{field_name}"] = user_list[field_name]
-
- optional_field_names = ["email", "affiliation", "address",
- "orcid", "orcid_platform",
- "telephone_number", "role",
- "social_media_name", "social_media_platform"]
- for field_name in optional_field_names:
- if field_name in user_list.keys():
- template[f"{trg}{field_name}"] = user_list[field_name]
- user_id += 1
-
- return template
-
- def parse_specimen(self, template: dict) -> dict:
- """Copy data in specimen section."""
- # print("Parsing sample...")
- src = "specimen"
- trg = f"/ENTRY[entry{self.entry_id}]/specimen/"
- if isinstance(self.yml[src], fd.FlatDict):
- if (isinstance(self.yml[f"{src}:atom_types"], list)) \
- and (len(self.yml[src + ":atom_types"]) >= 1):
- atom_types_are_valid = True
- for symbol in self.yml[f"{src}:atom_types"]:
- valid = isinstance(symbol, str) \
- and (symbol in chemical_symbols) and (symbol != "X")
- if valid is False:
- atom_types_are_valid = False
- break
- if atom_types_are_valid is True:
- template[f"{trg}atom_types"] \
- = ", ".join(list(self.yml[f"{src}:atom_types"]))
-
- required_field_names = ["name", "sample_history", "preparation_date"]
- for field_name in required_field_names:
- if field_name in self.yml[src].keys():
- template[f"{trg}{field_name}"] = self.yml[f"{src}:{field_name}"]
-
- optional_field_names = ["short_title", "description"]
- for field_name in optional_field_names:
- if field_name in self.yml[src].keys():
- template[f"{trg}{field_name}"] = self.yml[f"{src}:{field_name}"]
-
- return template
-
- def parse_instrument_header(self, template: dict) -> dict:
- """Copy data in instrument_header section."""
- # print("Parsing instrument header...")
- src = "atom_probe"
- trg = f"/ENTRY[entry{self.entry_id}]/atom_probe/"
- if isinstance(self.yml[src], fd.FlatDict):
- required_field_names = ["instrument_name", "status"]
- for field_name in required_field_names:
- if field_name in self.yml[src].keys():
- template[f"{trg}{field_name}"] = self.yml[f"{src}:{field_name}"]
- optional_field_names = ["location"]
- for field_name in optional_field_names:
- if field_name in self.yml[src].keys():
- template[f"{trg}{field_name}"] = self.yml[f"{src}:{field_name}"]
-
- float_field_names = ["flight_path_length", "field_of_view"]
- for field_name in float_field_names:
- if (f"{field_name}:value" in self.yml[src].keys()) \
- and (f"{field_name}:unit" in self.yml[src].keys()):
- template[f"{trg}{field_name}"] \
- = np.float64(self.yml[f"{src}:{field_name}:value"])
- template[f"{trg}{field_name}/@units"] \
- = self.yml[f"{src}:{field_name}:unit"]
-
- return template
-
- def parse_fabrication(self, template: dict) -> dict:
- """Copy data in fabrication section."""
- # print("Parsing fabrication...")
- src = "atom_probe"
- trg = f"/ENTRY[entry{self.entry_id}]/atom_probe/FABRICATION[fabrication]/"
- required_field_names = ["fabrication_vendor", "fabrication_model"]
- for field_name in required_field_names:
- if field_name in self.yml[src].keys():
- suffix = field_name.replace("fabrication_", "")
- template[f"{trg}{suffix}"] = self.yml[f"{src}:{field_name}"]
-
- optional_field_names = ["fabrication_identifier", "fabrication_capabilities"]
- for field_name in optional_field_names:
- if field_name in self.yml[src].keys():
- suffix = field_name.replace("fabrication_", "")
- template[f"{trg}{suffix}"] = self.yml[f"{src}:{field_name}"]
-
- return template
-
- def parse_analysis_chamber(self, template: dict) -> dict:
- """Copy data in analysis_chamber section."""
- # print("Parsing analysis chamber...")
- src = "atom_probe"
- trg = f"/ENTRY[entry{self.entry_id}]/atom_probe/analysis_chamber/"
- float_field_names = ["analysis_chamber_pressure"]
- for field_name in float_field_names:
- if (f"{field_name}:value" in self.yml[src].keys()) \
- and (f"{field_name}:unit" in self.yml[src].keys()):
- suffix = field_name.replace("analysis_chamber_", "")
- template[f"{trg}{suffix}"] \
- = np.float64(self.yml[f"{src}:{field_name}:value"])
- template[f"{trg}{suffix}/@units"] = self.yml[f"{src}:{field_name}:unit"]
-
- return template
-
- def parse_reflectron(self, template: dict) -> dict:
- """Copy data in reflectron section."""
- # print("Parsing reflectron...")
- src = "atom_probe"
- trg = f"/ENTRY[entry{self.entry_id}]/atom_probe/REFLECTRON[reflectron]/"
- required_field_names = ["reflectron_applied"]
- for field_name in required_field_names:
- if field_name in self.yml[src].keys():
- suffix = field_name.replace("reflectron_", "")
- template[f"{trg}{suffix}"] = self.yml[f"{src}:{field_name}"]
-
- return template
-
- def parse_local_electrode(self, template: dict) -> dict:
- """Copy data in local_electrode section."""
- # print("Parsing local electrode...")
- src = "atom_probe"
- trg = f"/ENTRY[entry{self.entry_id}]/atom_probe/local_electrode/"
- required_field_names = ["local_electrode_name"]
- for field_name in required_field_names:
- if field_name in self.yml[src].keys():
- suffix = field_name.replace("local_electrode_", "")
- template[f"{trg}{suffix}"] = self.yml[f"{src}:{field_name}"]
-
- return template
-
- def parse_detector(self, template: dict) -> dict:
- """Copy data in ion_detector section."""
- # print("Parsing detector...")
- src = "atom_probe"
- trg = f"/ENTRY[entry{self.entry_id}]/atom_probe/ion_detector/"
- required_field_names = ["ion_detector_type", "ion_detector_name",
- "ion_detector_model", "ion_detector_serial_number"]
- for field_name in required_field_names:
- if field_name in self.yml[src].keys():
- suffix = field_name.replace("ion_detector_", "")
- template[f"{trg}{suffix}"] = self.yml[f"{src}:{field_name}"]
-
- return template
-
- def parse_stage_lab(self, template: dict) -> dict:
- """Copy data in stage lab section."""
- # print("Parsing stage_lab...")
- src = "atom_probe"
- trg = f"/ENTRY[entry{self.entry_id}]/atom_probe/stage_lab/"
- if isinstance(self.yml[src], fd.FlatDict):
- required_value_fields = ["stage_lab_base_temperature"]
- for field_name in required_value_fields:
- if (f"{field_name}:value" in self.yml[src].keys()) \
- and (f"{field_name}:unit" in self.yml[src].keys()):
- suffix = field_name.replace("stage_lab_", "")
- template[f"{trg}{suffix}"] \
- = np.float64(self.yml[f"{src}:{field_name}:value"])
- template[f"{trg}{suffix}/@units"] \
- = self.yml[f"{src}:{field_name}:unit"]
-
- return template
-
- def parse_specimen_monitoring(self, template: dict) -> dict:
- """Copy data in specimen_monitoring section."""
- # print("Parsing specimen_monitoring...")
- src = "atom_probe"
- trg = f"/ENTRY[entry{self.entry_id}]/atom_probe/specimen_monitoring/"
- if isinstance(self.yml[src], fd.FlatDict):
- required_field_names = ["specimen_monitoring_detection_rate"]
- for field_name in required_field_names:
- if field_name in self.yml[src].keys():
- template[f"{trg}detection_rate"] \
- = np.float64(self.yml[f"{src}:{field_name}"])
- float_field_names = ["specimen_monitoring_initial_radius",
- "specimen_monitoring_shank_angle"]
- for float_field_name in float_field_names:
- if (f"{float_field_name}:value" in self.yml[src].keys()) \
- and (f"{float_field_name}:unit" in self.yml[src].keys()):
- suffix = float_field_name.replace("specimen_monitoring_", "")
- template[f"{trg}{suffix}"] \
- = np.float64(self.yml[f"{src}:{float_field_name}:value"])
- template[f"{trg}{suffix}/@units"] \
- = self.yml[f"{src}:{float_field_name}:unit"]
-
- return template
-
- def parse_control_software(self, template: dict) -> dict:
- """Copy data in control software section."""
- # print("Parsing control software...")
- src = "atom_probe"
- trg = f"/ENTRY[entry{self.entry_id}]/atom_probe/control_software/"
- if isinstance(self.yml[src], fd.FlatDict):
- prefix = "control_software"
- if (f"{prefix}_program" in self.yml[src].keys()) \
- and (f"{prefix}_program__attr_version" in self.yml[src].keys()):
- template[f"{trg}PROGRAM[program1]/program"] \
- = self.yml[f"{src}:{prefix}_program"]
- template[f"{trg}PROGRAM[program1]/program/@version"] \
- = self.yml[f"{src}:{prefix}_program__attr_version"]
-
- return template
-
- def parse_pulser(self, template: dict) -> dict:
- """Copy data in pulser section."""
- # print("Parsing pulser...")
- src = "atom_probe:pulser"
- trg = f"/ENTRY[entry{self.entry_id}]/atom_probe/pulser/"
- if isinstance(self.yml[src], fd.FlatDict):
- if "pulse_mode" in self.yml[src].keys():
- pulse_mode = self.yml[f"{src}:pulse_mode"]
- template[f"{trg}pulse_mode"] = pulse_mode
- else: # can not parse selectively as pulse_mode was not documented
- return template
-
- if "pulse_fraction" in self.yml[src].keys():
- template[f"{trg}pulse_fraction"] \
- = np.float64(self.yml[f"{src}:pulse_fraction"])
-
- float_field_names = ["pulse_frequency"]
- for field_name in float_field_names:
- if (f"{field_name}:value" in self.yml[src].keys()) \
- and (f"{field_name}:unit" in self.yml[src].keys()):
- template[f"{trg}{field_name}"] \
- = np.float64(self.yml[f"{src}:{field_name}:value"])
- template[f"{trg}{field_name}/@units"] \
- = self.yml[f"{src}:{field_name}:unit"]
- # additionally required data for laser and laser_and_voltage runs
- if pulse_mode != "voltage":
- trg = f"/ENTRY[entry{self.entry_id}]/atom_probe/" \
- f"pulser/SOURCE[laser_source1]/"
- if "laser_source_name" in self.yml[src].keys():
- template[f"{trg}name"] = self.yml[f"{src}:laser_source_name"]
-
- float_field_names = ["laser_source_wavelength",
- "laser_source_power",
- "laser_source_pulse_energy"]
- for field_name in float_field_names:
- if (f"{field_name}:value" in self.yml[src].keys()) \
- and (f"{field_name}:unit" in self.yml[src].keys()):
- suffix = field_name.replace("laser_source_", "")
- template[f"{trg}{suffix}"] \
- = np.float64(self.yml[f"{src}:{field_name}:value"])
- template[f"{trg}{suffix}/@units"] \
- = self.yml[f"{src}:{field_name}:unit"]
-
- return template
-
- def parse_reconstruction(self, template: dict) -> dict:
- """Copy data in reconstruction section."""
- # print("Parsing reconstruction...")
- src = "reconstruction"
- trg = f"/ENTRY[entry{self.entry_id}]/atom_probe/reconstruction/"
- if ("program" in self.yml[src].keys()) \
- and ("program__attr_version" in self.yml[src].keys()):
- template[f"{trg}PROGRAM[program1]/program"] \
- = self.yml[f"{src}:program"]
- template[f"{trg}PROGRAM[program1]/program/@version"] \
- = self.yml[f"{src}:program__attr_version"]
-
- required_field_names = ["protocol_name", "parameter",
- "crystallographic_calibration"]
- for field_name in required_field_names:
- if field_name in self.yml[src].keys():
- template[f"{trg}{field_name}"] = self.yml[f"{src}:{field_name}"]
-
- return template
-
- def parse_ranging(self, template: dict) -> dict:
- """Copy data in ranging section."""
- # print("Parsing ranging...")
- src = "ranging"
- trg = f"/ENTRY[entry{self.entry_id}]/atom_probe/ranging/"
- if ("program" in self.yml[src].keys()) \
- and ("program__attr_version" in self.yml[src].keys()):
- template[f"{trg}PROGRAM[program1]/program"] = self.yml[f"{src}:program"]
- template[f"{trg}PROGRAM[program1]/program/@version"] \
- = self.yml[f"{src}:program__attr_version"]
-
- return template
-
- def report(self, template: dict) -> dict:
- """Copy data from self into template the appdef instance."""
- self.parse_entry(template)
- self.parse_user(template)
- self.parse_specimen(template)
- self.parse_instrument_header(template)
- self.parse_fabrication(template)
- self.parse_analysis_chamber(template)
- self.parse_reflectron(template)
- self.parse_local_electrode(template)
- self.parse_detector(template)
- self.parse_stage_lab(template)
- self.parse_specimen_monitoring(template)
- self.parse_control_software(template)
- self.parse_pulser(template)
- self.parse_reconstruction(template)
- self.parse_ranging(template)
- return template
diff --git a/pynxtools/dataconverter/readers/apm/utils/apm_load_deployment_specifics.py b/pynxtools/dataconverter/readers/apm/utils/apm_load_deployment_specifics.py
new file mode 100644
index 000000000..87dc05950
--- /dev/null
+++ b/pynxtools/dataconverter/readers/apm/utils/apm_load_deployment_specifics.py
@@ -0,0 +1,57 @@
+#
+# Copyright The NOMAD Authors.
+#
+# This file is part of NOMAD. See https://nomad-lab.eu for further info.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+"""Load deployment-specific quantities."""
+
+# pylint: disable=no-member
+
+import flatdict as fd
+
+import yaml
+
+from pynxtools.dataconverter.readers.apm.map_concepts.apm_deployment_specifics_to_nx_map \
+ import NxApmDeploymentSpecificInput
+
+from pynxtools.dataconverter.readers.shared.map_concepts.mapping_functors \
+ import apply_modifier, variadic_path_to_specific_path
+
+
+class NxApmNomadOasisConfigurationParser: # pylint: disable=too-few-public-methods
+ """Parse deployment-specific configuration."""
+
+ def __init__(self, file_name: str, entry_id: int):
+ print(f"Extracting data from deployment-specific configuration file: {file_name}")
+ if (file_name.rsplit('/', 1)[-1].endswith(".oasis.specific.yaml")
+ or file_name.endswith(".oasis.specific.yml")) and entry_id > 0:
+ self.entry_id = entry_id
+ self.file_name = file_name
+ with open(self.file_name, "r", encoding="utf-8") as stream:
+ self.yml = fd.FlatDict(yaml.safe_load(stream), delimiter="/")
+ else:
+ self.entry_id = 1
+ self.file_name = ""
+ self.yml = {}
+
+ def report(self, template: dict) -> dict:
+ """Copy data from configuration applying mapping functors."""
+ for nx_path, modifier in NxApmDeploymentSpecificInput.items():
+ if nx_path not in ("IGNORE", "UNCLEAR"):
+ trg = variadic_path_to_specific_path(nx_path, [self.entry_id, 1])
+ res = apply_modifier(modifier, self.yml)
+ if res is not None:
+ template[trg] = res
+ return template
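Both this new parser and the rewritten ELN parser load their YAML through `fd.FlatDict(..., delimiter="/")`, so nested mappings are addressed by slash-joined paths such as `atom_probe/pulser/pulse_mode`. A small pure-Python sketch of that flattening (an assumption about what the `flatdict` dependency provides, not its actual code):

```python
# Sketch of FlatDict(..., delimiter="/"): recursively flatten nested
# mappings into a single dict keyed by slash-joined paths, which is the
# shape the variadic-path mapping tables are matched against.
def flatten(nested, prefix=""):
    flat = {}
    for key, value in nested.items():
        path = f"{prefix}/{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, path))
        else:
            flat[path] = value
    return flat


cfg = {"atom_probe": {"location": "Berlin",
                      "pulser": {"pulse_mode": "laser"}}}
flat = flatten(cfg)
```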
diff --git a/pynxtools/dataconverter/readers/apm/utils/apm_load_generic_eln.py b/pynxtools/dataconverter/readers/apm/utils/apm_load_generic_eln.py
new file mode 100644
index 000000000..ed36eec23
--- /dev/null
+++ b/pynxtools/dataconverter/readers/apm/utils/apm_load_generic_eln.py
@@ -0,0 +1,175 @@
+#
+# Copyright The NOMAD Authors.
+#
+# This file is part of NOMAD. See https://nomad-lab.eu for further info.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+"""Wrapping multiple parsers for vendor files with NOMAD OASIS/ELN/YAML metadata."""
+
+# pylint: disable=no-member,duplicate-code,too-many-nested-blocks
+
+import flatdict as fd
+
+import yaml
+
+from ase.data import chemical_symbols
+
+from pynxtools.dataconverter.readers.apm.map_concepts.apm_eln_to_nx_map \
+ import NxApmElnInput, NxUserFromListOfDict
+
+from pynxtools.dataconverter.readers.shared.map_concepts.mapping_functors \
+ import variadic_path_to_specific_path, apply_modifier
+
+from pynxtools.dataconverter.readers.apm.utils.apm_parse_composition_table \
+ import parse_composition_table
+
+
+class NxApmNomadOasisElnSchemaParser: # pylint: disable=too-few-public-methods
+ """Parse eln_data.yaml dump file content generated from a NOMAD OASIS YAML.
+
+ This parser implements a design where an instance of a specific NOMAD
+ custom schema ELN template is used to fill pieces of information which
+ are typically not contained in files from technology partners
+ (e.g. pos, epos, apt, rng, rrng, ...). Until now, this custom schema and
+ the NXapm application definition do not use a fully harmonized vocabulary.
+ Therefore, this hardcoded implementation is needed, which maps specifically
+ named pieces of information from the custom schema instance onto named fields
+ in an instance of NXapm.
+
+ The functionalities in this ELN YAML parser do not check if the
+ instantiated template yields an instance which is compliant with NXapm.
+ Instead, this task is handled by the generic part of the dataconverter
+ during the verification of the template dictionary.
+ """
+
+ def __init__(self, file_name: str, entry_id: int):
+ print(f"Extracting data from ELN file: {file_name}")
+ if (file_name.rsplit('/', 1)[-1].startswith("eln_data")
+ or file_name.startswith("eln_data")) and entry_id > 0:
+ self.entry_id = entry_id
+ self.file_name = file_name
+ with open(self.file_name, "r", encoding="utf-8") as stream:
+ self.yml = fd.FlatDict(yaml.safe_load(stream), delimiter="/")
+ else:
+ self.entry_id = 1
+ self.file_name = ""
+ self.yml = {}
+
+ def parse_sample_composition(self, template: dict) -> dict:
+ """Interpret human-readable ELN input to generate a consistent composition table."""
+ src = "sample/composition"
+ if src in self.yml.keys():
+ if isinstance(self.yml[src], list):
+ dct = parse_composition_table(self.yml[src])
+
+ prfx = f"/ENTRY[entry{self.entry_id}]/sample/" \
+ f"CHEMICAL_COMPOSITION[chemical_composition]"
+ unit = "at.-%" # the assumed default unit
+ if "normalization" in dct:
+ if dct["normalization"] in ["%", "at%", "at-%", "at.-%", "ppm", "ppb"]:
+ unit = "at.-%"
+ template[f"{prfx}/normalization"] = "atom_percent"
+ elif dct["normalization"] in ["wt%", "wt-%", "wt.-%"]:
+ unit = "wt.-%"
+ template[f"{prfx}/normalization"] = "weight_percent"
+ else:
+ return template
+ ion_id = 1
+ for symbol in chemical_symbols[1::]:
+ # ase convention: chemical_symbols[0] == "X", so the
+ # ordinal (atomic) number can be used for indexing
+ if symbol in dct:
+ if isinstance(dct[symbol], tuple) and len(dct[symbol]) == 2:
+ trg = f"{prfx}/ION[ion{ion_id}]"
+ template[f"{trg}/name"] = symbol
+ template[f"{trg}/composition"] = dct[symbol][0]
+ template[f"{trg}/composition/@units"] = unit
+ if dct[symbol][1] is not None:
+ template[f"{trg}/composition_error"] = dct[symbol][1]
+ template[f"{trg}/composition_error/@units"] = unit
+ ion_id += 1
+ return template
+
+ def parse_user_section(self, template: dict) -> dict:
+ """Copy data from user section into template."""
+ src = "user"
+ if src in self.yml.keys():
+ if isinstance(self.yml[src], list):
+ if all(isinstance(entry, dict) for entry in self.yml[src]) is True:
+ user_id = 1
+ # custom schema delivers a list of dictionaries...
+ for user_dict in self.yml[src]:
+ # ... for each of them inspect for fields mappable on NeXus
+ identifier = [self.entry_id, user_id]
+ # identifier to get instance NeXus path from variadic NeXus path
+ # walk all quantities on the left-hand side of the mapping table
+ # and copy over those that can be resolved from the user dict
+ for nx_path, modifier in NxUserFromListOfDict.items():
+ if nx_path not in ("IGNORE", "UNCLEAR"):
+ trg = variadic_path_to_specific_path(nx_path, identifier)
+ res = apply_modifier(modifier, user_dict)
+ if res is not None:
+ template[trg] = res
+ user_id += 1
+ return template
+
+ def parse_laser_pulser_details(self, template: dict) -> dict:
+ """Copy data in pulser section."""
+ # additional laser-specific details only relevant when the laser was used
+ src = "atom_probe/pulser/pulse_mode"
+ if src in self.yml.keys():
+ if self.yml[src] == "voltage":
+ return template
+ else:
+ return template
+ src = "atom_probe/pulser/laser_source"
+ if src in self.yml.keys():
+ if isinstance(self.yml[src], list):
+ if all(isinstance(entry, dict) for entry in self.yml[src]) is True:
+ laser_id = 1
+ # custom schema delivers a list of dictionaries...
+ trg = f"/ENTRY[entry{self.entry_id}]/atom_probe/pulser" \
+ f"/SOURCE[source{laser_id}]"
+ for laser_dict in self.yml[src]:
+ if "name" in laser_dict.keys():
+ template[f"{trg}/name"] = laser_dict["name"]
+ quantities = ["power", "pulse_energy", "wavelength"]
+ for quant in quantities:
+ if isinstance(laser_dict[quant], dict):
+ if ("value" in laser_dict[quant].keys()) \
+ and ("unit" in laser_dict[quant].keys()):
+ template[f"{trg}/{quant}"] \
+ = laser_dict[quant]["value"]
+ template[f"{trg}/{quant}/@units"] \
+ = laser_dict[quant]["unit"]
+ laser_id += 1
+ return template
+
+ def parse_other_sections(self, template: dict) -> dict:
+ """Copy data from custom schema into template."""
+ for nx_path, modifier in NxApmElnInput.items():
+ if nx_path not in ("IGNORE", "UNCLEAR"):
+ trg = variadic_path_to_specific_path(nx_path, [self.entry_id, 1])
+ res = apply_modifier(modifier, self.yml)
+ if res is not None:
+ template[trg] = res
+ return template
+
+ def report(self, template: dict) -> dict:
+ """Copy data from self into the template, i.e. the appdef instance."""
+ self.parse_sample_composition(template)
+ self.parse_user_section(template)
+ self.parse_laser_pulser_details(template)
+ self.parse_other_sections(template)
+ return template
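The `parse_sample_composition` method above keys two outputs off one normalization token: the unit string stored next to each composition value, and the NeXus normalization enum. Extracted as a standalone sketch for review (function name is hypothetical; the token lists mirror the branches in the method):

```python
# Sketch of the normalization mapping in parse_sample_composition:
# one human-readable token selects both the unit string and the
# NeXus normalization enum value.
def normalization_to_unit(token):
    if token in ("%", "at%", "at-%", "at.-%", "ppm", "ppb"):
        return "at.-%", "atom_percent"
    if token in ("wt%", "wt-%", "wt.-%"):
        return "wt.-%", "weight_percent"
    return None, None  # unsupported token: composition section is skipped
```

Note that ppm/ppb tokens map onto atom percent here; the per-element scaling (e.g. dividing ppm values by 1.0e4) happens later in `apm_parse_composition_table.py`.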
diff --git a/pynxtools/dataconverter/readers/apm/utils/apm_ranging_io.py b/pynxtools/dataconverter/readers/apm/utils/apm_load_ranging.py
similarity index 100%
rename from pynxtools/dataconverter/readers/apm/utils/apm_ranging_io.py
rename to pynxtools/dataconverter/readers/apm/utils/apm_load_ranging.py
diff --git a/pynxtools/dataconverter/readers/apm/utils/apm_reconstruction_io.py b/pynxtools/dataconverter/readers/apm/utils/apm_load_reconstruction.py
similarity index 100%
rename from pynxtools/dataconverter/readers/apm/utils/apm_reconstruction_io.py
rename to pynxtools/dataconverter/readers/apm/utils/apm_load_reconstruction.py
diff --git a/pynxtools/dataconverter/readers/apm/utils/apm_parse_composition_table.py b/pynxtools/dataconverter/readers/apm/utils/apm_parse_composition_table.py
new file mode 100644
index 000000000..cf8f2bc56
--- /dev/null
+++ b/pynxtools/dataconverter/readers/apm/utils/apm_parse_composition_table.py
@@ -0,0 +1,179 @@
+#
+# Copyright The NOMAD Authors.
+#
+# This file is part of NOMAD. See https://nomad-lab.eu for further info.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+"""Parse human-readable composition infos from set of ELN string text fields."""
+
+# pylint: disable=no-member,too-many-branches
+
+import re
+
+import numpy as np
+
+from ase.data import chemical_symbols
+
+
+def parse_human_readable_composition_case_one(symbol):
+ """Handle specification of matrix or remainder element."""
+ return ("define_matrix", symbol, None, None, None)
+
+
+def parse_human_readable_composition_case_two(args, symbol):
+ """Handle case element and at.-% composition, no comp. stdev."""
+ if args[1] in ["rem", "remainder", "matrix"]:
+ return ("define_matrix", symbol, None, None, None)
+ composition = re.match(r"[-+]?(?:\d*\.*\d+)", args[1])
+ if composition is not None:
+ fraction = np.float64(composition[0])
+ return ("add_element", symbol, fraction, None, "at.-%")
+ return (None, None, None, None, None)
+
+
+def parse_human_readable_composition_case_three(human_input, args, symbol):
+ """Handle case element with different than default normalization, no comp. stdev."""
+ composition = re.findall(r"[-+]?(?:\d*\.*\d+)", human_input)
+ if len(composition) == 1:
+ fraction = np.float64(composition[0])
+ normalization = args[2]
+ if normalization in ["%", "at%", "at-%", "at.-%"]:
+ return ("add_element", symbol, fraction, None, "at.-%")
+ if normalization in ["wt%", "wt-%", "wt.-%"]:
+ return ("add_element", symbol, fraction, None, "wt.-%")
+ if normalization == "ppm":
+ return ("add_element", symbol, fraction / 1.0e4, None, "at.-%")
+ if normalization == "ppb":
+ return ("add_element", symbol, fraction / 1.0e7, None, "at.-%")
+ return (None, None, None, None, None)
+
+
+def parse_human_readable_composition_case_four(human_input, symbol):
+ """Handle case at.-% normalization with comp. stdev."""
+ composition = re.findall(r"[-+]?(?:\d*\.*\d+)", human_input)
+ composition_error = human_input.count("+-")
+ if (len(composition) == 2) and (composition_error == 1):
+ fraction = np.float64(composition[0])
+ error = np.float64(composition[1])
+ return ("add_element", symbol, fraction, error, "at.-%")
+ return (None, None, None, None, None)
+
+
+def parse_human_readable_composition_case_five(human_input, args, symbol):
+ """Handle case with different than standard normalization and comp. stdev."""
+ composition = re.findall(r"[-+]?(?:\d*\.*\d+)", human_input)
+ if (len(composition) == 2) and (human_input.count("+-") == 1):
+ fraction = np.float64(composition[0])
+ error = np.float64(composition[1])
+ normalization = args[2]
+ if normalization in ["%", "at%", "at-%", "at.-%"]:
+ return ("add_element", symbol, fraction, error, "at.-%")
+ if normalization in ["wt%", "wt-%", "wt.-%"]:
+ return ("add_element", symbol, fraction, error, "wt.-%")
+ if normalization == "ppm":
+ return ("add_element", symbol, fraction / 1.0e4, error / 1.0e4, "at.-%")
+ if normalization == "ppb":
+ return ("add_element", symbol, fraction / 1.0e7, error / 1.0e7, "at.-%")
+ return (None, None, None, None, None)
+
+
+def parse_human_readable_composition_information(eln_input):
+ """Identify instruction to parse from eln_input to define composition table."""
+ args = eln_input.split(" ")
+ if len(args) >= 1:
+ element_symbol = args[0]
+ # composition value argument fraction is always expected in percent
+    # i.e. the user should have written 98 instead of 0.98!
+ if (element_symbol != "X") and (element_symbol in chemical_symbols):
+ # case: "Mo"
+ if len(args) == 1:
+ return parse_human_readable_composition_case_one(
+ element_symbol)
+ # case: "Mo matrix" or "Mo 98.0", always assuming at.-%!
+ if len(args) == 2:
+ return parse_human_readable_composition_case_two(
+ args, element_symbol)
+ # case: "Mo 98 wt.-%", selectable at.-%, ppm, ppb, or wt.-%!
+ if len(args) == 3:
+ return parse_human_readable_composition_case_three(
+ eln_input, args, element_symbol)
+ # case: "Mo 98 +- 2", always assuming at.-%!
+ if len(args) == 4:
+ return parse_human_readable_composition_case_four(
+ eln_input, element_symbol)
+ # case: "Mo 98 wt.-% +- 2", selectable at.-%, ppm, ppb, or wt.-%!
+ if len(args) == 5:
+ return parse_human_readable_composition_case_five(
+ eln_input, args, element_symbol)
+ return (None, None, None, None, None)
+
+
+def parse_composition_table(composition_list):
+ """Check if all the entries in the composition list yield a valid composition table."""
+ composition_table = {}
+    # check that there are no contradictions or inconsistencies
+ for entry in composition_list:
+ instruction, element, composition, stdev, normalization \
+ = parse_human_readable_composition_information(entry)
+ # print(f"{instruction}, {element}, {composition}, {stdev}, {normalization}")
+
+ if instruction == "add_element":
+ if "normalization" not in composition_table:
+ if normalization is not None:
+ composition_table["normalization"] = normalization
+ else:
+ # as the normalization model is already defined, all following statements
+ # need to comply because we assume we are not allowed to mix atom and weight
+ # percent normalization in a composition_table
+ if normalization is not None:
+ if normalization != composition_table["normalization"]:
+ raise ValueError("Composition list is contradicting as it \
+ mixes atom- with weight-percent normalization!")
+
+ if element not in composition_table:
+ composition_table[element] = (composition, stdev)
+ else:
+ raise ValueError("Composition list is incorrectly formatted as if has \
+ at least multiple lines for the same element!")
+ continue
+ if instruction == "define_matrix":
+ if element not in composition_table:
+ composition_table[element] = (None, None)
+ # because the fraction is unclear at this point
+ else:
+ raise ValueError("Composition list is contradicting as it includes \
+ at least two statements what the matrix should be!")
+
+ # determine remaining fraction
+ total_fractions = 0.
+ remainder_element = None
+ for keyword, tpl in composition_table.items():
+ if keyword != "normalization":
+ if (tpl is not None) and (tpl != (None, None)):
+ total_fractions += tpl[0]
+ else:
+ remainder_element = keyword
+ # print(f"Total fractions {total_fractions}, remainder element {remainder_element}")
+ if remainder_element is None:
+ raise ValueError("Composition list inconsistent because either fractions for \
+ elements do not add up to 100. or no symbol for matrix defined!")
+
+ if composition_table: # means != {}
+ composition_table[remainder_element] = (1.0e2 - total_fractions, None)
+ # error propagation model required
+
+ # document if reporting as percent or fractional values
+ composition_table["percent"] = True
+
+ return composition_table
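
To illustrate the overall flow of `parse_composition_table`, here is a minimal, self-contained sketch covering only the simplest entry forms ("Mo matrix", "Cr 3", "Ni 12 +- 2"); the function names are hypothetical and the normalization and error handling of the real module are omitted:

```python
import re


def parse_entry(entry):
    """Parse one human-readable composition line into (element, fraction, stdev).
    Fractions are in percent; fraction is None for the matrix element."""
    args = entry.split(" ")
    element = args[0]
    if len(args) == 1 or args[1] in ("rem", "remainder", "matrix"):
        return (element, None, None)
    numbers = re.findall(r"[-+]?(?:\d*\.*\d+)", entry)
    if len(numbers) == 2 and entry.count("+-") == 1:
        return (element, float(numbers[0]), float(numbers[1]))
    return (element, float(numbers[0]), None)


def build_table(composition_list):
    """Build {element: (fraction, stdev)} and assign the remainder to the matrix."""
    table = {}
    remainder = None
    total = 0.0
    for entry in composition_list:
        element, fraction, stdev = parse_entry(entry)
        if element in table:
            raise ValueError(f"Duplicate entry for {element}!")
        table[element] = (fraction, stdev)
        if fraction is None:
            remainder = element
        else:
            total += fraction
    if remainder is None:
        raise ValueError("No matrix element defined!")
    table[remainder] = (100.0 - total, None)
    return table


table = build_table(["Mo matrix", "Ni 12 +- 2", "Cr 3"])
```

With this input the matrix element Mo receives the remaining fraction of 85 percent, mirroring how the real parser back-fills the matrix entry.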
diff --git a/pynxtools/dataconverter/readers/ellips/reader.py b/pynxtools/dataconverter/readers/ellips/reader.py
index 58a921c2e..bd7c8bf19 100644
--- a/pynxtools/dataconverter/readers/ellips/reader.py
+++ b/pynxtools/dataconverter/readers/ellips/reader.py
@@ -19,14 +19,15 @@
import os
from typing import Tuple, Any
import math
+from importlib.metadata import version
import yaml
import pandas as pd
import numpy as np
-# import h5py
from pynxtools.dataconverter.readers.base.reader import BaseReader
from pynxtools.dataconverter.readers.ellips.mock import MockEllips
from pynxtools.dataconverter.helpers import extract_atom_types
from pynxtools.dataconverter.readers.utils import flatten_and_replace, FlattenSettings
+from pynxtools import get_nexus_version, get_nexus_version_hash
DEFAULT_HEADER = {'sep': '\t', 'skip': 0}
@@ -373,7 +374,7 @@ def write_scan_axis(name: str, values: list, units: str):
header["Instrument/angle_of_incidence"] = unique_angles
for axis in ["detection_angle", "incident_angle"]:
- write_scan_axis(axis, unique_angles, "degrees")
+ write_scan_axis(axis, unique_angles, "degree")
# Create mocked ellipsometry data template:
if is_mock:
@@ -416,7 +417,15 @@ def read(self,
template = populate_template_dict(header, template)
spectrum_type = header["Data"]["spectrum_type"]
- spectrum_unit = header["Data"]["spectrum_unit"]
+ if header["Data"]["spectrum_unit"] == "Angstroms":
+ spectrum_unit = "angstrom"
+ else:
+ spectrum_unit = header["Data"]["spectrum_unit"]
+ # MK:: Carola, Ron, Flo, Tamas, Sandor refactor the above-mentioned construct
+ # there has to be a unit parsing control logic already at the level of this reader
+ # because test-data.data has improper units like Angstroms or degrees
+        # the fix above prevents these incorrect units from being blindly carried
+        # over into the nxs file, which would otherwise cause NOMAD to fail
template[f"/ENTRY[entry]/plot/AXISNAME[{spectrum_type}]"] = \
{"link": f"/entry/data_collection/{spectrum_type}_spectrum"}
template[f"/ENTRY[entry]/data_collection/NAME_spectrum[{spectrum_type}_spectrum]/@units"] \
@@ -432,16 +441,19 @@ def read(self,
"link": "/entry/data_collection/measured_data",
"shape": np.index_exp[index, dindx, :]
}
- template[f"/ENTRY[entry]/plot/DATA[{key}]/@units"] = "degrees"
+ # MK:: Carola, Ron, Flo, Tamas, Sandor refactor the following line
+ # using a proper unit parsing logic
+ template[f"/ENTRY[entry]/plot/DATA[{key}]/@units"] = "degree"
if dindx == 0 and index == 0:
template[f"/ENTRY[entry]/plot/DATA[{key}]/@long_name"] = \
- f"{plot_name} (degrees)"
+ f"{plot_name} (degree)"
template[f"/ENTRY[entry]/plot/DATA[{key}_errors]"] = \
{
"link": "/entry/data_collection/data_error",
"shape": np.index_exp[index, dindx, :]
}
- template[f"/ENTRY[entry]/plot/DATA[{key}_errors]/@units"] = "degrees"
+ # MK:: Carola, Ron, Flo, Tamas, Sandor refactor the following line
+ template[f"/ENTRY[entry]/plot/DATA[{key}_errors]/@units"] = "degree"
# Define default plot showing Psi and Delta at all angles:
template["/@default"] = "entry"
@@ -455,6 +467,16 @@ def read(self,
for index in range(1, len(data_list)):
template["/ENTRY[entry]/plot/@auxiliary_signals"] += data_list[index]
+ template["/ENTRY[entry]/definition"] = "NXellipsometry"
+ template["/ENTRY[entry]/definition/@url"] = (
+ "https://github.com/FAIRmat-NFDI/nexus_definitions/"
+ f"blob/{get_nexus_version_hash()}/contributed_definitions/NXellipsometry.nxdl.xml"
+ )
+ template["/ENTRY[entry]/definition/@version"] = get_nexus_version()
+ template["/ENTRY[entry]/program_name"] = "pynxtools"
+ template["/ENTRY[entry]/program_name/@version"] = version("pynxtools")
+ template["/ENTRY[entry]/program_name/@url"] = "https://github.com/FAIRmat-NFDI/pynxtools"
+
return template
diff --git a/pynxtools/dataconverter/readers/em_nion/concepts/README.md b/pynxtools/dataconverter/readers/em_nion/map_concepts/README.md
similarity index 100%
rename from pynxtools/dataconverter/readers/em_nion/concepts/README.md
rename to pynxtools/dataconverter/readers/em_nion/map_concepts/README.md
diff --git a/pynxtools/dataconverter/readers/em_nion/concepts/swift_display_items_to_nx_concepts.py b/pynxtools/dataconverter/readers/em_nion/map_concepts/swift_display_items_to_nx.py
similarity index 100%
rename from pynxtools/dataconverter/readers/em_nion/concepts/swift_display_items_to_nx_concepts.py
rename to pynxtools/dataconverter/readers/em_nion/map_concepts/swift_display_items_to_nx.py
diff --git a/pynxtools/dataconverter/readers/em_nion/concepts/generic_eln_mapping.py b/pynxtools/dataconverter/readers/em_nion/map_concepts/swift_eln_to_nx_map.py
similarity index 100%
rename from pynxtools/dataconverter/readers/em_nion/concepts/generic_eln_mapping.py
rename to pynxtools/dataconverter/readers/em_nion/map_concepts/swift_eln_to_nx_map.py
diff --git a/pynxtools/dataconverter/readers/em_nion/concepts/nx_image_ang_space.py b/pynxtools/dataconverter/readers/em_nion/map_concepts/swift_to_nx_image_ang_space.py
similarity index 100%
rename from pynxtools/dataconverter/readers/em_nion/concepts/nx_image_ang_space.py
rename to pynxtools/dataconverter/readers/em_nion/map_concepts/swift_to_nx_image_ang_space.py
diff --git a/pynxtools/dataconverter/readers/em_nion/concepts/nx_image_real_space.py b/pynxtools/dataconverter/readers/em_nion/map_concepts/swift_to_nx_image_real_space.py
similarity index 100%
rename from pynxtools/dataconverter/readers/em_nion/concepts/nx_image_real_space.py
rename to pynxtools/dataconverter/readers/em_nion/map_concepts/swift_to_nx_image_real_space.py
diff --git a/pynxtools/dataconverter/readers/em_nion/concepts/nx_spectrum_eels.py b/pynxtools/dataconverter/readers/em_nion/map_concepts/swift_to_nx_spectrum_eels.py
similarity index 100%
rename from pynxtools/dataconverter/readers/em_nion/concepts/nx_spectrum_eels.py
rename to pynxtools/dataconverter/readers/em_nion/map_concepts/swift_to_nx_spectrum_eels.py
diff --git a/pynxtools/dataconverter/readers/em_nion/reader.py b/pynxtools/dataconverter/readers/em_nion/reader.py
index ac785fda3..e226aca91 100644
--- a/pynxtools/dataconverter/readers/em_nion/reader.py
+++ b/pynxtools/dataconverter/readers/em_nion/reader.py
@@ -23,10 +23,10 @@
from pynxtools.dataconverter.readers.base.reader import BaseReader
-from pynxtools.dataconverter.readers.em_nion.utils.use_case_selector \
+from pynxtools.dataconverter.readers.em_nion.utils.swift_define_io_cases \
import EmNionUseCaseSelector
-from pynxtools.dataconverter.readers.em_nion.utils.em_generic_eln_io \
+from pynxtools.dataconverter.readers.em_nion.utils.swift_load_generic_eln \
import NxEmNionElnSchemaParser
from pynxtools.dataconverter.readers.em_nion.utils.swift_zipped_project_parser \
diff --git a/pynxtools/dataconverter/readers/em_nion/utils/versioning.py b/pynxtools/dataconverter/readers/em_nion/utils/em_nion_versioning.py
similarity index 100%
rename from pynxtools/dataconverter/readers/em_nion/utils/versioning.py
rename to pynxtools/dataconverter/readers/em_nion/utils/em_nion_versioning.py
diff --git a/pynxtools/dataconverter/readers/em_nion/utils/use_case_selector.py b/pynxtools/dataconverter/readers/em_nion/utils/swift_define_io_cases.py
similarity index 100%
rename from pynxtools/dataconverter/readers/em_nion/utils/use_case_selector.py
rename to pynxtools/dataconverter/readers/em_nion/utils/swift_define_io_cases.py
diff --git a/pynxtools/dataconverter/readers/em_nion/utils/swift_dimscale_axes.py b/pynxtools/dataconverter/readers/em_nion/utils/swift_generate_dimscale_axes.py
similarity index 96%
rename from pynxtools/dataconverter/readers/em_nion/utils/swift_dimscale_axes.py
rename to pynxtools/dataconverter/readers/em_nion/utils/swift_generate_dimscale_axes.py
index cdc15e895..fbd9cfcf2 100644
--- a/pynxtools/dataconverter/readers/em_nion/utils/swift_dimscale_axes.py
+++ b/pynxtools/dataconverter/readers/em_nion/utils/swift_generate_dimscale_axes.py
@@ -23,7 +23,7 @@
import numpy as np
-from pynxtools.dataconverter.readers.em_nion.concepts.swift_display_items_to_nx_concepts \
+from pynxtools.dataconverter.readers.em_nion.map_concepts.swift_display_items_to_nx \
import metadata_constraints, check_existence_of_required_fields # nexus_concept_dict
diff --git a/pynxtools/dataconverter/readers/em_nion/utils/em_generic_eln_io.py b/pynxtools/dataconverter/readers/em_nion/utils/swift_load_generic_eln.py
similarity index 95%
rename from pynxtools/dataconverter/readers/em_nion/utils/em_generic_eln_io.py
rename to pynxtools/dataconverter/readers/em_nion/utils/swift_load_generic_eln.py
index 8be648477..4028e4986 100644
--- a/pynxtools/dataconverter/readers/em_nion/utils/em_generic_eln_io.py
+++ b/pynxtools/dataconverter/readers/em_nion/utils/swift_load_generic_eln.py
@@ -27,16 +27,16 @@
from ase.data import chemical_symbols
-from pynxtools.dataconverter.readers.em_nion.utils.versioning \
+from pynxtools.dataconverter.readers.em_nion.utils.em_nion_versioning \
import NX_EM_NION_ADEF_NAME, NX_EM_NION_ADEF_VERSION
-from pynxtools.dataconverter.readers.em_nion.utils.versioning \
+from pynxtools.dataconverter.readers.em_nion.utils.em_nion_versioning \
import NX_EM_NION_EXEC_NAME, NX_EM_NION_EXEC_VERSION
-from pynxtools.dataconverter.readers.em_nion.concepts.swift_handle_nx_concepts \
+from pynxtools.dataconverter.readers.shared.map_concepts.mapping_functors \
import apply_modifier, variadic_path_to_specific_path
-from pynxtools.dataconverter.readers.em_nion.concepts.generic_eln_mapping \
+from pynxtools.dataconverter.readers.em_nion.map_concepts.swift_eln_to_nx_map \
import NxEmElnInput, NxUserFromListOfDict, NxDetectorListOfDict, NxSample
diff --git a/pynxtools/dataconverter/readers/em_nion/utils/swift_zipped_project_parser.py b/pynxtools/dataconverter/readers/em_nion/utils/swift_zipped_project_parser.py
index f72f7d48c..17f74ba61 100644
--- a/pynxtools/dataconverter/readers/em_nion/utils/swift_zipped_project_parser.py
+++ b/pynxtools/dataconverter/readers/em_nion/utils/swift_zipped_project_parser.py
@@ -38,21 +38,21 @@
from pynxtools.dataconverter.readers.em_nion.utils.swift_uuid_to_file_name \
import uuid_to_file_name
-from pynxtools.dataconverter.readers.em_nion.utils.swift_dimscale_axes \
+from pynxtools.dataconverter.readers.em_nion.utils.swift_generate_dimscale_axes \
import get_list_of_dimension_scale_axes
-from pynxtools.dataconverter.readers.em_nion.concepts.swift_display_items_to_nx_concepts \
+from pynxtools.dataconverter.readers.em_nion.map_concepts.swift_display_items_to_nx \
import nexus_concept_dict, identify_nexus_concept_key
-from pynxtools.dataconverter.readers.em_nion.concepts.swift_handle_nx_concepts \
+from pynxtools.dataconverter.readers.shared.map_concepts.mapping_functors \
import apply_modifier, variadic_path_to_specific_path
-from pynxtools.dataconverter.readers.em_nion.concepts.nx_image_real_space \
+from pynxtools.dataconverter.readers.em_nion.map_concepts.swift_to_nx_image_real_space \
import NxImageRealSpaceDict
-from pynxtools.dataconverter.readers.em_nion.utils.versioning \
+from pynxtools.dataconverter.readers.em_nion.utils.em_nion_versioning \
import NX_EM_NION_SWIFT_NAME, NX_EM_NION_SWIFT_VERSION
-from pynxtools.dataconverter.readers.em_nion.utils.versioning \
+from pynxtools.dataconverter.readers.em_nion.utils.em_nion_versioning \
import NX_EM_NION_EXEC_NAME, NX_EM_NION_EXEC_VERSION
diff --git a/pynxtools/dataconverter/readers/em_om/utils/image_transform.py b/pynxtools/dataconverter/readers/em_om/utils/image_transform.py
index 34f98266f..7369ebef8 100644
--- a/pynxtools/dataconverter/readers/em_om/utils/image_transform.py
+++ b/pynxtools/dataconverter/readers/em_om/utils/image_transform.py
@@ -23,7 +23,6 @@
# f" how-do-i-make-pil-take-into-account-the-shortest-side-when-creating-a-thumbnail"
import numpy as np
-from PIL import Image as pil
def thumbnail(img, size=300):
@@ -39,16 +38,14 @@ def thumbnail(img, size=300):
return img
if old_width == old_height:
- img.thumbnail((size, size), pil.ANTIALIAS)
-
+ img.thumbnail((size, size))
elif old_height > old_width:
ratio = float(old_width) / float(old_height)
new_width = ratio * size
- img = img.resize((int(np.floor(new_width)), size), pil.ANTIALIAS)
-
+ img = img.resize((int(np.floor(new_width)), size))
elif old_width > old_height:
ratio = float(old_height) / float(old_width)
new_height = ratio * size
- img = img.resize((size, int(np.floor(new_height))), pil.ANTIALIAS)
+ img = img.resize((size, int(np.floor(new_height))))
return img
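
The resizing arithmetic that `thumbnail()` applies (longer side clamped to `size`, aspect ratio preserved) can be sketched without PIL; `fit_within` is a hypothetical helper, not part of the module:

```python
def fit_within(width, height, size=300):
    """Return the (width, height) the thumbnail logic above would produce:
    the longer side is scaled down to `size`, the aspect ratio is kept,
    and images already within bounds are left unchanged."""
    if max(width, height) <= size:
        return (width, height)
    if width >= height:
        return (size, int(height / width * size))
    return (int(width / height * size), size)


print(fit_within(600, 300))  # (300, 150)
```

Dropping `pil.ANTIALIAS` in the diff above is required because the constant was removed in Pillow 10; `thumbnail` and `resize` fall back to a sensible default resampling filter.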
diff --git a/pynxtools/dataconverter/readers/example/reader.py b/pynxtools/dataconverter/readers/example/reader.py
index 81b31b6de..83e7438b0 100644
--- a/pynxtools/dataconverter/readers/example/reader.py
+++ b/pynxtools/dataconverter/readers/example/reader.py
@@ -52,7 +52,8 @@ def read(self,
for k in template.keys():
# The entries in the template dict should correspond with what the dataconverter
# outputs with --generate-template for a provided NXDL file
- if k.startswith("/ENTRY[entry]/required_group"):
+ if k.startswith("/ENTRY[entry]/required_group") \
+ or k == "/ENTRY[entry]/optional_parent/req_group_in_opt_group":
continue
field_name = k[k.rfind("/") + 1:]
@@ -61,6 +62,10 @@ def read(self,
if f"{field_name}_units" in data.keys() and f"{k}/@units" in template.keys():
template[f"{k}/@units"] = data[f"{field_name}_units"]
+ template["required"]["/ENTRY[entry]/optional_parent/required_child"] = 1
+ template["optional"][("/ENTRY[entry]/optional_parent/"
+ "req_group_in_opt_group/DATA[data]")] = [0, 1]
+
# Add non template key
template["/ENTRY[entry]/does/not/exist"] = "None"
template["/ENTRY[entry]/required_group/description"] = "A test description"
diff --git a/pynxtools/dataconverter/readers/json_map/README.md b/pynxtools/dataconverter/readers/json_map/README.md
index 4b4820c49..b81aec969 100644
--- a/pynxtools/dataconverter/readers/json_map/README.md
+++ b/pynxtools/dataconverter/readers/json_map/README.md
@@ -1,24 +1,63 @@
# JSON Map Reader
-This reader allows you to convert either data from a .json file or an xarray exported as a .pickle using a flat .mapping.json file.
+## What is this reader?
+
+This reader is designed to allow users of pynxtools to convert their existing data with the help of a map file. The map file tells the reader what to pick from your data files and convert them to FAIR NeXus files. The following formats are supported as input files:
+* HDF5 (any extension works, e.g. h5, hdf5, nxs, etc.)
+* JSON
+* Python Dict Objects Pickled with [pickle](https://docs.python.org/3/library/pickle.html). These can contain [xarray.DataArray](https://docs.xarray.dev/en/stable/generated/xarray.DataArray.html) objects as well as regular Python types and Numpy types.
It accepts any NXDL file that you like as long as your mapping file contains all the fields.
Please use the --generate-template function of the dataconverter to create a .mapping.json file.
```console
-user@box:~$ python convert.py --nxdl NXmynxdl --generate-template > mynxdl.mapping.json
+user@box:~$ dataconverter --nxdl NXmynxdl --generate-template > mynxdl.mapping.json
```
There are some example files you can use:
+[data.mapping.json](/tests/data/dataconverter/readers/json_map/data.mapping.json)
-[data.mapping.json](/tests/data/tools/dataconverter/readers/json_map/data.mapping.json)
-
-[data.json](/tests/data/tools/dataconverter/readers/json_map/data.json)
+[data.json](/tests/data/dataconverter/readers/json_map/data.json)
```console
-user@box:~$ python convert.py --nxdl NXtest --input-file data.json --input-file data.mapping.json --reader json_map
+user@box:~$ dataconverter --nxdl NXtest --input-file data.json --mapping data.mapping.json
+```
+
+##### [Example](/examples/json_map/) with HDF5 files.
+
+## The mapping.json file
+
+This file is designed to let you fill in the requirements of a NeXus Application Definition without writing any code. If you already have data in the formats listed above, you just need to use this mapping file to help the dataconverter pick your data correctly.
+
+The mapping files will always be based on the Template the dataconverter generates. See above on how to generate a mapping file.
+The right hand side values of the Template keys are what you can modify.
+
+Here are the three different ways you can fill the right hand side of the Template keys:
+* Write the nested path to the data inside your data file. A path is indicated by a leading `/` before the word `entry`, as in `/entry/data/current_295C` below.
+Example:
+
+```json
+ "/ENTRY[entry]/DATA[data]/current_295C": "/entry/data/current_295C",
+ "/ENTRY[entry]/NXODD_name/posint_value": "/a_level_down/another_level_down/posint_value",
+```
+
+* Write the values directly in the mapping file for missing data from your data file.
+
+```json
+
+ "/ENTRY[entry]/PROCESS[process]/program": "Bluesky",
+ "/ENTRY[entry]/PROCESS[process]/program/@version": "1.6.7"
+```
+
+* Write JSON objects with a link key. This follows the same link mechanism that the dataconverter implements. In the context of this reader, you can only use external links to your data files. In the example below, `current.nxs` is an already existing HDF5 file that we link to in our new NeXus file without copying over the data. The format is as follows:
+`"link": ":"`
+Note: This only works for HDF5 files currently.
+
+```json
+ "/ENTRY[entry]/DATA[data]/current_295C": {"link": "current.nxs:/entry/data/current_295C"},
+ "/ENTRY[entry]/DATA[data]/current_300C": {"link": "current.nxs:/entry/data/current_300C"},
```
## Contact person in FAIRmat for this reader
-Sherjeel Shabih
\ No newline at end of file
+Sherjeel Shabih
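
Putting the three fill styles from the README together, a complete (hypothetical) mapping file for a small application definition could look like this — the keys come from the generated template, and the right-hand sides mix a data-file path, literal values, and an HDF5 link:

```json
{
  "/ENTRY[entry]/DATA[data]/current_295C": "/entry/data/current_295C",
  "/ENTRY[entry]/DATA[data]/current_300C": {"link": "current.nxs:/entry/data/current_300C"},
  "/ENTRY[entry]/PROCESS[process]/program": "Bluesky",
  "/ENTRY[entry]/PROCESS[process]/program/@version": "1.6.7"
}
```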
diff --git a/pynxtools/dataconverter/readers/json_map/reader.py b/pynxtools/dataconverter/readers/json_map/reader.py
index 25123dc94..d17bb075b 100644
--- a/pynxtools/dataconverter/readers/json_map/reader.py
+++ b/pynxtools/dataconverter/readers/json_map/reader.py
@@ -21,10 +21,10 @@
import pickle
import numpy as np
import xarray
+from mergedeep import merge
from pynxtools.dataconverter.readers.base.reader import BaseReader
from pynxtools.dataconverter.template import Template
-from pynxtools.dataconverter.helpers import ensure_all_required_fields_exist
from pynxtools.dataconverter import hdfdict
@@ -58,9 +58,26 @@ def get_val_nested_keystring_from_dict(keystring, data):
return data[current_key]
+def get_attrib_nested_keystring_from_dict(keystring, data):
+ """
+ Fetches all attributes from the data dict using path strings without a leading '/':
+ 'path/to/data/in/dict'
+ """
+ if isinstance(keystring, (list, dict)):
+ return keystring
+
+ key_splits = keystring.split("/")
+ parents = key_splits[:-1]
+ target = key_splits[-1]
+ for key in parents:
+ data = data[key]
+
+ return data[target + "@"] if target + "@" in data.keys() else None
+
+
def is_path(keystring):
"""Checks whether a given value in the mapping is a mapping path or just data"""
- return isinstance(keystring, str) and keystring[0] == "/"
+ return isinstance(keystring, str) and len(keystring) > 0 and keystring[0] == "/"
def fill_undocumented(mapping, template, data):
@@ -69,6 +86,7 @@ def fill_undocumented(mapping, template, data):
if is_path(value):
template["undocumented"][path] = get_val_nested_keystring_from_dict(value[1:],
data)
+ fill_attributes(path, value[1:], data, template)
else:
template["undocumented"][path] = value
@@ -82,6 +100,7 @@ def fill_documented(template, mapping, template_provided, data):
if is_path(map_str):
template[path] = get_val_nested_keystring_from_dict(map_str[1:],
data)
+ fill_attributes(path, map_str[1:], data, template)
else:
template[path] = map_str
@@ -90,6 +109,14 @@ def fill_documented(template, mapping, template_provided, data):
pass
+def fill_attributes(path, map_str, data, template):
+ """Fills in the template all attributes found in the data object"""
+ attribs = get_attrib_nested_keystring_from_dict(map_str, data)
+ if attribs:
+ for key, value in attribs.items():
+ template[path + "/@" + key] = value
+
+
def convert_shapes_to_slice_objects(mapping):
"""Converts shape slice strings to slice objects for indexing"""
for key in mapping:
@@ -98,6 +125,25 @@ def convert_shapes_to_slice_objects(mapping):
mapping[key]["shape"] = parse_slice(mapping[key]["shape"])
+def get_map_from_partials(partials, template, data):
+ """Takes a list of partials and returns a mapping dictionary to fill partials in our template"""
+ mapping: dict = {}
+ for partial in partials:
+ path = ""
+ template_path = ""
+ for part in partial.split("/")[1:]:
+ path = path + "/" + part
+ attribs = get_attrib_nested_keystring_from_dict(path[1:], data)
+ if template_path + "/" + part in template.keys():
+ template_path = template_path + "/" + part
+ else:
+ nx_name = f"{attribs['NX_class'][2:].upper()}[{part}]" if attribs and "NX_class" in attribs else part # pylint: disable=line-too-long
+ template_path = template_path + "/" + nx_name
+ mapping[template_path] = path
+
+ return mapping
+
+
class JsonMapReader(BaseReader):
"""A reader that takes a mapping json file and a data file/object to return a template."""
@@ -119,10 +165,10 @@ def read(self,
The mapping is only accepted as file.mapping.json to the inputs.
"""
data: dict = {}
- mapping: dict = {}
+ mapping: dict = None
+ partials: list = []
- if objects:
- data = objects[0]
+ data = objects[0] if objects else data
for file_path in file_paths:
file_extension = file_path[file_path.rindex("."):]
@@ -143,23 +189,26 @@ def read(self,
if is_hdf5:
hdf = hdfdict.load(file_path)
hdf.unlazy()
- data = dict(hdf)
+ merge(data, dict(hdf))
+ if "entry@" in data and "partial" in data["entry@"]:
+ partials.extend(data["entry@"]["partial"])
if mapping is None:
- template = Template({x: "/hierarchical/path/in/your/datafile" for x in template})
- raise IOError("Please supply a JSON mapping file: --input-file"
- " my_nxdl_map.mapping.json\n\n You can use this "
- "template for the required fields: \n" + str(template))
+ if len(partials) > 0:
+ mapping = get_map_from_partials(partials, template, data)
+ else:
+ template = Template({x: "/hierarchical/path/in/your/datafile" for x in template})
+ raise IOError("Please supply a JSON mapping file: --input-file"
+ " my_nxdl_map.mapping.json\n\n You can use this "
+ "template for the required fields: \n" + str(template))
+ new_template = Template()
convert_shapes_to_slice_objects(mapping)
- new_template = Template()
fill_documented(new_template, mapping, template, data)
fill_undocumented(mapping, new_template, data)
- ensure_all_required_fields_exist(template, new_template)
-
return new_template
diff --git a/pynxtools/dataconverter/readers/mpes/reader.py b/pynxtools/dataconverter/readers/mpes/reader.py
index fce988f76..7d860765c 100644
--- a/pynxtools/dataconverter/readers/mpes/reader.py
+++ b/pynxtools/dataconverter/readers/mpes/reader.py
@@ -198,20 +198,6 @@ def handle_h5_and_json_file(file_paths, objects):
f"but {file_path} does not match.",
)
- if not os.path.exists(file_path):
- file_path = os.path.join(
- os.path.dirname(__file__),
- "..",
- "..",
- "..",
- "..",
- "tests",
- "data",
- "dataconverter",
- "readers",
- "mpes",
- file_path,
- )
if not os.path.exists(file_path):
raise FileNotFoundError(
errno.ENOENT,
@@ -252,11 +238,30 @@ def _getattr(obj, attr):
if "index" in attr:
axis = attr.split(".")[0]
- return str(obj.dims.index(f"{axis}"))
+ return obj.dims.index(f"{axis}")
return reduce(_getattr, [obj] + attr.split("."))
+def fill_data_indices_in_config(config_file_dict, x_array_loaded):
+ """Add data indices key value pairs to the config_file
+ dictionary from the xarray dimensions if not already
+ present.
+ """
+ for key in list(config_file_dict):
+ if "*" in key:
+ value = config_file_dict[key]
+ for dim in x_array_loaded.dims:
+ new_key = key.replace("*", dim)
+ new_value = value.replace("*", dim)
+
+ if new_key not in config_file_dict.keys() \
+ and new_value not in config_file_dict.values():
+ config_file_dict[new_key] = new_value
+
+ config_file_dict.pop(key)
+
+
class MPESReader(BaseReader):
"""MPES-specific reader class"""
@@ -265,7 +270,7 @@ class MPESReader(BaseReader):
# Whitelist for the NXDLs that the reader supports and can process
supported_nxdls = ["NXmpes"]
- def read(
+ def read( # pylint: disable=too-many-branches
self,
template: dict = None,
file_paths: Tuple[str] = None,
@@ -283,6 +288,8 @@ def read(
eln_data_dict,
) = handle_h5_and_json_file(file_paths, objects)
+ fill_data_indices_in_config(config_file_dict, x_array_loaded)
+
for key, value in config_file_dict.items():
if isinstance(value, str) and ":" in value:
diff --git a/pynxtools/dataconverter/readers/rii_database/reader.py b/pynxtools/dataconverter/readers/rii_database/reader.py
index ae36b3884..32fb7c5fa 100644
--- a/pynxtools/dataconverter/readers/rii_database/reader.py
+++ b/pynxtools/dataconverter/readers/rii_database/reader.py
@@ -17,13 +17,12 @@
#
"""Convert refractiveindex.info yaml files to nexus"""
from typing import Tuple, Any, Dict
-import logging
from pynxtools.dataconverter.readers.json_yml.reader import YamlJsonReader
from pynxtools.dataconverter.readers.rii_database.dispersion_reader import (
DispersionReader,
)
-from pynxtools.dataconverter.readers.utils import parse_json
+from pynxtools.dataconverter.readers.utils import parse_json, handle_objects
class RiiReader(YamlJsonReader):
@@ -40,7 +39,7 @@ def __init__(self, *args, **kwargs):
".yaml": self.read_dispersion,
".json": self.parse_json_w_fileinfo,
"default": lambda _: self.appdef_defaults(),
- "objects": self.handle_objects,
+ "objects": self.handle_rii_objects,
}
def read_dispersion(self, filename: str):
@@ -86,20 +85,9 @@ def parse_json_w_fileinfo(self, filename: str) -> Dict[str, Any]:
return template
- def handle_objects(self, objects: Tuple[Any]) -> Dict[str, Any]:
+ def handle_rii_objects(self, objects: Tuple[Any]) -> Dict[str, Any]:
"""Handle objects and generate template entries from them"""
- if objects is None:
- return {}
-
- template = {}
-
- for obj in objects:
- if not isinstance(obj, dict):
- logging.warning("Ignoring unknown object of type %s", type(obj))
- continue
-
- template.update(obj)
-
+ template = handle_objects(objects)
self.fill_dispersion_in(template)
return template
diff --git a/pynxtools/dataconverter/readers/em_nion/concepts/swift_handle_nx_concepts.py b/pynxtools/dataconverter/readers/shared/map_concepts/mapping_functors.py
similarity index 100%
rename from pynxtools/dataconverter/readers/em_nion/concepts/swift_handle_nx_concepts.py
rename to pynxtools/dataconverter/readers/shared/map_concepts/mapping_functors.py
diff --git a/pynxtools/dataconverter/readers/shared/shared_utils.py b/pynxtools/dataconverter/readers/shared/shared_utils.py
index 59d28ba6d..629e29a0f 100644
--- a/pynxtools/dataconverter/readers/shared/shared_utils.py
+++ b/pynxtools/dataconverter/readers/shared/shared_utils.py
@@ -22,15 +22,17 @@
# pylint: disable=E1101, R0801
-import git
+# import git
def get_repo_last_commit() -> str:
"""Identify the last commit to the repository."""
- repo = git.Repo(search_parent_directories=True)
- sha = str(repo.head.object.hexsha)
- if sha != "":
- return sha
+ # repo = git.Repo(search_parent_directories=True)
+ # sha = str(repo.head.object.hexsha)
+ # if sha != "":
+ # return sha
+    # currently the update-north-markus branch on nomad-FAIR does not pick up
+    # git, even though git is in the base image and gitpython is in the pynxtools deps
return "unknown git commit id or unable to parse git reverse head"
diff --git a/pynxtools/dataconverter/readers/transmission/reader.py b/pynxtools/dataconverter/readers/transmission/reader.py
index 3d4f0e152..ccc94374e 100644
--- a/pynxtools/dataconverter/readers/transmission/reader.py
+++ b/pynxtools/dataconverter/readers/transmission/reader.py
@@ -22,7 +22,7 @@
from pynxtools.dataconverter.readers.json_yml.reader import YamlJsonReader
import pynxtools.dataconverter.readers.transmission.metadata_parsers as mpars
-from pynxtools.dataconverter.readers.utils import parse_json, parse_yml
+from pynxtools.dataconverter.readers.utils import parse_json, parse_yml, handle_objects
# Dictionary mapping metadata in the asc file to the paths in the NeXus file.
@@ -254,6 +254,7 @@ class TransmissionReader(YamlJsonReader):
".yml": lambda fname: parse_yml(fname, CONVERT_DICT, REPLACE_NESTED),
".yaml": lambda fname: parse_yml(fname, CONVERT_DICT, REPLACE_NESTED),
"default": lambda _: add_def_info(),
+ "objects": handle_objects,
}
diff --git a/pynxtools/dataconverter/readers/utils.py b/pynxtools/dataconverter/readers/utils.py
index 23fbfbdd9..c1826d744 100644
--- a/pynxtools/dataconverter/readers/utils.py
+++ b/pynxtools/dataconverter/readers/utils.py
@@ -16,12 +16,15 @@
# limitations under the License.
#
"""Utility functions for the NeXus reader classes."""
+import logging
from dataclasses import dataclass, replace
-from typing import List, Any, Dict, Optional
+from typing import List, Any, Dict, Optional, Tuple
from collections.abc import Mapping
import json
import yaml
+logger = logging.getLogger(__name__)
+
@dataclass
class FlattenSettings():
@@ -201,3 +204,20 @@ def parse_json(file_path: str) -> Dict[str, Any]:
"""
with open(file_path, "r", encoding="utf-8") as file:
return json.load(file)
+
+
+def handle_objects(objects: Tuple[Any]) -> Dict[str, Any]:
+ """Handle objects and generate template entries from them"""
+ if objects is None:
+ return {}
+
+ template = {}
+
+ for obj in objects:
+ if not isinstance(obj, dict):
+ logger.warning("Ignoring unknown object of type %s", type(obj))
+ continue
+
+ template.update(obj)
+
+ return template
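The shared `handle_objects` helper simply merges dict objects and warns about anything else; a standalone run:

```python
import logging

logging.basicConfig()
logger = logging.getLogger(__name__)


def handle_objects(objects):
    """Merge dict objects into one template dict, skipping non-dicts with a warning."""
    if objects is None:
        return {}
    template = {}
    for obj in objects:
        if not isinstance(obj, dict):
            logger.warning("Ignoring unknown object of type %s", type(obj))
            continue
        template.update(obj)
    return template


merged = handle_objects(({"/ENTRY[entry]/title": "scan1"},
                         [1, 2],  # not a dict -> warned about and skipped
                         {"/ENTRY[entry]/USER[user]/name": "A. User"}))
print(merged)
```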
diff --git a/pynxtools/dataconverter/readers/xrd/README.md b/pynxtools/dataconverter/readers/xrd/README.md
new file mode 100644
index 000000000..53c64dfc7
--- /dev/null
+++ b/pynxtools/dataconverter/readers/xrd/README.md
@@ -0,0 +1,40 @@
+# XRD Reader
+With the XRD reader, data from X-ray diffraction experiments can be read and written into a NeXus file (an HDF5 file with the extension `.nxs`) according to the NXxrd_pan application definition in [NeXus](https://github.com/FAIRmat-NFDI/nexus_definitions). There are a few different methods of measuring XRD: 1. θ:2θ instruments (e.g. Rigaku H3R), and 2. θ:θ instruments (e.g. PANalytical X’Pert Pro). The goal of this reader is to support both of these methods.
+
+**NOTE: This reader is still under development. As of now, it can only handle files with the extension `.xrdml`, obtained with PANalytical X’Pert Pro version 1.5 (method 2 described above). We are currently working to support more file types and versions.**
+
+## Contact Person in FAIRmat
+In principle, you can reach out to any member of Area B of the FAIRmat consortium, but Rubel Mozumder is the most suitable contact for a quick response.
+
+## Parsers
+In computer science, a parser is a program that breaks input into smaller parts (tokens) and relates those tokens in a tree structure; this helps a compiler understand the source code.
+
+The XRD reader calls such a program or class (the parser), which reads the experimental input file and reorganises the different physical and experimental concepts or properties into a structure defined by the developer.
+
+### class pynxtools.dataconverter.readers.xrd.xrd_parser.XRDMLParser
+
+ **inputs:**
+ file_path: Full path of the input file.
+
+ **Important method:**
+ get_slash_separated_xrd_dict() -> dict
+
+    This method can be used to check whether all the data from the input file have been read; it returns the slash-separated dict described above.
+
+
+### Other Parsers
+ **Coming Soon!!**
+
+### How To
+The reader can be run from Jupyter-notebook or Jupyter-lab with the following command:
+
+```sh
+ ! dataconverter \
+--reader xrd \
+--nxdl NXxrd_pan \
+--input-file $ \
+--input-file $ \
+--output .nxs
+```
+
+An example file can be found in GitLab in [nomad-remote-tools-hub](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-remote-tools-hub/-/tree/develop/docker/xrd); feel free to visit and try out the reader.
diff --git a/pynxtools/dataconverter/readers/xrd/__init__.py b/pynxtools/dataconverter/readers/xrd/__init__.py
new file mode 100644
index 000000000..d4ec4a8cc
--- /dev/null
+++ b/pynxtools/dataconverter/readers/xrd/__init__.py
@@ -0,0 +1,15 @@
+# Copyright The NOMAD Authors.
+#
+# This file is part of NOMAD. See https://nomad-lab.eu for further info.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/pynxtools/dataconverter/readers/xrd/config.py b/pynxtools/dataconverter/readers/xrd/config.py
new file mode 100644
index 000000000..4d3757b10
--- /dev/null
+++ b/pynxtools/dataconverter/readers/xrd/config.py
@@ -0,0 +1,117 @@
+"""This config file maps NeXus definition paths to data paths in the raw file."""
+
+# pylint: disable=C0301
+xrdml = {
+ "/ENTRY[entry]/2theta_plot/chi": {"xrdml_1.5": {"value": "",
+ "@units": "",
+ "@chi_indices": 0},
+ },
+ "/ENTRY[entry]/2theta_plot/intensity": {"xrdml_1.5": {"value": "/xrdMeasurements/xrdMeasurement/scan/dataPoints/intensities",
+ "@units": "counts/s"}
+ },
+ "/ENTRY[entry]/2theta_plot/omega": {"xrdml_1.5": {"value": "",
+ "@units": "",
+ "@omega_indices": 1},
+ },
+ "/ENTRY[entry]/2theta_plot/title": "Intensity Vs. Two Theta (deg.)",
+ "/ENTRY[entry]/2theta_plot/phi": {"xrdml_1.5": {"value": "",
+ "@units": "",
+ "@phi_indices": 0},
+ },
+ "/ENTRY[entry]/2theta_plot/two_theta": {"xrdml_1.5": {"value": "",
+ "@units": "deg",
+ "@two_theta_indices": 0},
+ },
+ "/ENTRY[entry]/COLLECTION[collection]/beam_attenuation_factors": {"xrdml_1.5": {"value": "/beamAttenuationFactors",
+ "@units": ""},
+ },
+ "/ENTRY[entry]/COLLECTION[collection]/omega/start": {"xrdml_1.5": {"value": "/xrdMeasurements/xrdMeasurement/scan/dataPoints/positions_2/startPosition",
+ "@units": "/xrdMeasurements/xrdMeasurement/scan/dataPoints/positions_2/unit"},
+ },
+ "/ENTRY[entry]/COLLECTION[collection]/omega/end": {"xrdml_1.5": {"value": "/xrdMeasurements/xrdMeasurement/scan/dataPoints/positions_2/endPosition",
+ "@units": "/xrdMeasurements/xrdMeasurement/scan/dataPoints/positions_2/unit"},
+ },
+ "/ENTRY[entry]/COLLECTION[collection]/omega/step": {"xrdml_1.5": {"value": "/xrdMeasurements/comment/entry_2/MinimumstepsizeOmega",
+ "@units": "/xrdMeasurements/xrdMeasurement/scan/dataPoints/positions_2/unit"},
+ },
+ "/ENTRY[entry]/COLLECTION[collection]/2theta/start": {"xrdml_1.5": {"value": "/xrdMeasurements/xrdMeasurement/scan/dataPoints/positions_1/startPosition",
+ "@units": "/xrdMeasurements/xrdMeasurement/scan/dataPoints/positions_1/unit"},
+ },
+ "/ENTRY[entry]/COLLECTION[collection]/2theta/end": {"xrdml_1.5": {"value": "/xrdMeasurements/xrdMeasurement/scan/dataPoints/positions_1/endPosition",
+ "@units": "/xrdMeasurements/xrdMeasurement/scan/dataPoints/positions_1/unit"},
+ },
+ "/ENTRY[entry]/COLLECTION[collection]/2theta/step": {"xrdml_1.5": {"value": "/xrdMeasurements/comment/entry_2/Minimumstepsize2Theta",
+ "@units": "/xrdMeasurements/xrdMeasurement/scan/dataPoints/positions_1/unit"},
+ },
+ "/ENTRY[entry]/COLLECTION[collection]/count_time": {"xrdml_1.5": {"value": "/xrdMeasurements/xrdMeasurement/scan/dataPoints/commonCountingTime",
+ "@units": "/xrdMeasurements/xrdMeasurement/scan/dataPoints/commonCountingTime/unit"},
+ },
+ "/ENTRY[entry]/COLLECTION[collection]/data_file": {"xrdml_1.5": {"value": ""}
+ },
+ "/ENTRY[entry]/COLLECTION[collection]/goniometer_x": {"xrdml_1.5": {"value": "/X",
+ "@units": ""},
+ },
+ "/ENTRY[entry]/COLLECTION[collection]/goniometer_y": {"xrdml_1.5": {"value": "/Y",
+ "@units": ""},
+ },
+ "/ENTRY[entry]/COLLECTION[collection]/goniometer_z": {"xrdml_1.5": {"value": "/Z",
+ "@units": ""},
+ },
+ "/ENTRY[entry]/COLLECTION[collection]/measurement_type": {"xrdml_1.5": {"value": "/xrdMeasurements/xrdMeasurement/measurementType",
+ "@units": ""},
+ },
+ "/ENTRY[entry]/INSTRUMENT[instrument]/DETECTOR[detector]/integration_time": {"xrdml_1.5": {"value": "",
+ "@units": ""},
+ },
+ "/ENTRY[entry]/INSTRUMENT[instrument]/DETECTOR[detector]/integration_time/@units": {"xrdml_1.5": {"value": "",
+ "@units": ""},
+ },
+ "/ENTRY[entry]/INSTRUMENT[instrument]/DETECTOR[detector]/scan_axis": {"xrdml_1.5": {"value": "/xrdMeasurements/xrdMeasurement/scan/scanAxis",
+ "@units": ""},
+ },
+ "/ENTRY[entry]/INSTRUMENT[instrument]/DETECTOR[detector]/scan_mode": {"xrdml_1.5": {"value": "/xrdMeasurements/xrdMeasurement/scan/mode",
+ "@units": ""},
+ },
+ "/ENTRY[entry]/INSTRUMENT[instrument]/SOURCE[source]/k_alpha_one": {"xrdml_1.5": {"value": "/xrdMeasurements/xrdMeasurement/usedWavelength/kAlpha1",
+ "@units": "/xrdMeasurements/xrdMeasurement/usedWavelength/kAlpha1/unit"},
+ },
+ "/ENTRY[entry]/INSTRUMENT[instrument]/SOURCE[source]/k_alpha_two": {"xrdml_1.5": {"value": "/xrdMeasurements/xrdMeasurement/usedWavelength/kAlpha2",
+ "@units": "/xrdMeasurements/xrdMeasurement/usedWavelength/kAlpha2/unit"},
+ },
+ "/ENTRY[entry]/INSTRUMENT[instrument]/SOURCE[source]/kbeta": {"xrdml_1.5": {"value": "/xrdMeasurements/xrdMeasurement/usedWavelength/kBeta",
+ "@units": "/xrdMeasurements/xrdMeasurement/usedWavelength/kBeta/unit"},
+ },
+ "/ENTRY[entry]/INSTRUMENT[instrument]/SOURCE[source]/ratio_k_alphatwo_k_alphaone": {"xrdml_1.5": {"value": "",
+ "@units": ""}
+ },
+ "/ENTRY[entry]/INSTRUMENT[instrument]/SOURCE[source]/xray_tube_current": {"xrdml_1.5": {"value": "/xrdMeasurements/xrdMeasurement/incidentBeamPath/xRayTube/current",
+ "@units": "/xrdMeasurements/xrdMeasurement/incidentBeamPath/xRayTube/current/unit"}
+ },
+ "/ENTRY[entry]/INSTRUMENT[instrument]/SOURCE[source]/source_peak_wavelength": {"xrdml_1.5": {"value": "",
+ "@units": ""}
+ },
+ "/ENTRY[entry]/INSTRUMENT[instrument]/SOURCE[source]/xray_tube_material": {"xrdml_1.5": {"value": "/xrdMeasurements/xrdMeasurement/incidentBeamPath/xRayTube/anodeMaterial",
+ "@units": ""},
+ },
+ "/ENTRY[entry]/INSTRUMENT[instrument]/SOURCE[source]/xray_tube_voltage": {"xrdml_1.5": {"value": "/xrdMeasurements/xrdMeasurement/incidentBeamPath/xRayTube/tension",
+ "@units": "/xrdMeasurements/xrdMeasurement/incidentBeamPath/xRayTube/tension/unit"}
+ },
+ "/ENTRY[entry]/SAMPLE[sample]/prepared_by": {"xrdml_1.5": {"value": ""}
+ },
+ "/ENTRY[entry]/SAMPLE[sample]/sample_id": {"xrdml_1.5": {"value": ""},
+ },
+ "/ENTRY[entry]/SAMPLE[sample]/sample_mode": {"xrdml_1.5": {"value": ""},
+ },
+ "/ENTRY[entry]/SAMPLE[sample]/sample_name": {"xrdml_1.5": {"value": ""},
+ },
+ "/ENTRY[entry]/definition": "NXxrd_pan",
+ "/ENTRY[entry]/method": "X-Ray Diffraction (XRD)",
+ "/ENTRY[entry]/q_plot/intensity": {"xrdml_1.5": {"value": "/xrdMeasurements/xrdMeasurement/scan/dataPoints/intensities",
+ "@units": "counts/s"},
+ },
+ "/ENTRY[entry]/q_plot/q": {"xrdml_1.5": {"value": "",
+ "@units": ""},
+ },
+ "/@default": "entry",
+ "/ENTRY[entry]/@default": "2theta_plot",
+}
diff --git a/pynxtools/dataconverter/readers/xrd/reader.py b/pynxtools/dataconverter/readers/xrd/reader.py
new file mode 100644
index 000000000..242498790
--- /dev/null
+++ b/pynxtools/dataconverter/readers/xrd/reader.py
@@ -0,0 +1,176 @@
+"""XRD reader."""
+# Copyright The NOMAD Authors.
+#
+# This file is part of NOMAD. See https://nomad-lab.eu for further info.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from typing import Tuple, Any, Dict, Union
+import json
+from pathlib import Path
+import xml.etree.ElementTree as ET
+
+import yaml
+
+from pynxtools.dataconverter.helpers import (generate_template_from_nxdl,
+ validate_data_dict)
+from pynxtools.dataconverter.template import Template
+from pynxtools.dataconverter.readers.xrd.xrd_parser import parse_and_fill_template
+from pynxtools.dataconverter.readers.utils import flatten_and_replace, FlattenSettings
+from pynxtools.dataconverter.readers.base.reader import BaseReader
+
+CONVERT_DICT: Dict[str, str] = {
+ 'unit': '@units',
+ 'Instrument': 'INSTRUMENT[instrument]',
+ 'Source': 'SOURCE[source]',
+ 'Detector': 'DETECTOR[detector]',
+ 'Collection': 'COLLECTION[collection]',
+ 'Sample': 'SAMPLE[sample]',
+ 'version': '@version',
+ 'User': 'USER[user]',
+}
+
+
+# Global var to collect the root from get_template_from_nxdl_name()
+# and use it in validate_data_dict()
+ROOT: ET.Element = None
+REPLACE_NESTED: Dict[str, Any] = {}
+XRD_FILE_EXTENSIONS = [".xrdml", "xrdml", ".udf", ".raw", ".xye"]
+
+
+def get_template_from_nxdl_name(nxdl_name):
+ """Generate template from nxdl name.
+
+ Example of nxdl name could be NXxrd_pan.
+ Parameters
+ ----------
+ nxdl_name : str
+ Name of nxdl file e.g. NXmpes
+
+ Returns
+ -------
+ Template
+ Empty template.
+
+ Raises
+ ------
+ ValueError
+ Error if nxdl file is not found.
+ """
+ nxdl_file = nxdl_name + ".nxdl.xml"
+ current_path = Path(__file__)
+ def_path = current_path.parent.parent.parent.parent / 'definitions'
+    # Check contributed definitions
+ full_nxdl_path = Path(def_path, 'contributed_definitions', nxdl_file)
+ root = None
+ if full_nxdl_path.exists():
+ root = ET.parse(full_nxdl_path).getroot()
+ else:
+ # Check application definition
+ full_nxdl_path = Path(def_path, 'applications', nxdl_file)
+
+ if root is None and full_nxdl_path.exists():
+ root = ET.parse(full_nxdl_path).getroot()
+ else:
+ full_nxdl_path = Path(def_path, 'base_classes', nxdl_file)
+
+ if root is None and full_nxdl_path.exists():
+ root = ET.parse(full_nxdl_path).getroot()
+ elif root is None:
+ raise ValueError("Need correct NXDL name")
+
+ template = Template()
+ generate_template_from_nxdl(root=root, template=template)
+ return template
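The three-step NXDL lookup in `get_template_from_nxdl_name` follows a simple search order, sketched here standalone (the directory layout is an assumption based on the code above):

```python
from pathlib import Path


def find_nxdl(def_path: Path, nxdl_name: str) -> Path:
    """Search contributed, application, then base-class definitions in order."""
    nxdl_file = nxdl_name + ".nxdl.xml"
    for subdir in ("contributed_definitions", "applications", "base_classes"):
        candidate = def_path / subdir / nxdl_file
        if candidate.exists():
            return candidate
    raise ValueError("Need correct NXDL name")


# With a nonexistent definitions directory, the lookup fails loudly:
try:
    find_nxdl(Path("/nonexistent"), "NXxrd_pan")
except ValueError as err:
    print(err)
```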
+
+
+def get_template_from_xrd_reader(nxdl_name, file_paths):
+ """Get filled template from reader.
+
+ Parameters
+ ----------
+ nxdl_name : str
+ Name of nxdl definition
+ file_paths : Tuple[str]
+ Tuple of path of files.
+
+ Returns
+ -------
+ Template
+ Template which is a map from NeXus concept path to value.
+ """
+
+ template = get_template_from_nxdl_name(nxdl_name)
+
+ data = XRDReader().read(template=template,
+ file_paths=file_paths)
+ validate_data_dict(template=template, data=data, nxdl_root=ROOT)
+ return data
+
+
+# pylint: disable=too-few-public-methods
+class XRDReader(BaseReader):
+ """Reader for XRD."""
+
+ supported_nxdls = ["NXxrd_pan"]
+
+ def read(self,
+ template: dict = None,
+ file_paths: Tuple[str] = None,
+ objects: Tuple[Any] = None):
+        """General read method to prepare the template."""
+
+ if not isinstance(file_paths, tuple) and not isinstance(file_paths, list):
+ file_paths = (file_paths,)
+ filled_template: Union[Dict, None] = Template()
+ eln_dict: Union[Dict[str, Any], None] = None
+ config_dict: Dict = {}
+ xrd_file: str = ""
+ xrd_file_ext: str = ""
+ for file in file_paths:
+ ext = "".join(Path(file).suffixes)
+ if ext == '.json':
+ with open(file, mode="r", encoding="utf-8") as fl_obj:
+ config_dict = json.load(fl_obj)
+ elif ext in ['.yaml', '.yml']:
+ with open(file, mode="r", encoding="utf-8") as fl_obj:
+ eln_dict = flatten_and_replace(
+ FlattenSettings(
+ yaml.safe_load(fl_obj),
+ CONVERT_DICT, REPLACE_NESTED
+ )
+ )
+ elif ext in XRD_FILE_EXTENSIONS:
+ xrd_file_ext = ext
+ xrd_file = file
+        if xrd_file:
+            parse_and_fill_template(template, xrd_file, config_dict, eln_dict)
+        else:
+            raise ValueError(f"No XRD experiment file found; expected one of"
+                             f" the extensions {XRD_FILE_EXTENSIONS}.")
+
+        # Remove empty concepts and clean up the template
+        for key, val in list(template.items()):
+
+            if val is None:
+                del template[key]
+            else:
+                filled_template[key] = val
+        if not filled_template.keys():
+            raise ValueError("Reader could not read anything! Check the input files and their"
+                             " corresponding extensions.")
+ return filled_template
+
+
+READER = XRDReader
diff --git a/pynxtools/dataconverter/readers/xrd/xrd_helper.py b/pynxtools/dataconverter/readers/xrd/xrd_helper.py
new file mode 100644
index 000000000..40874be50
--- /dev/null
+++ b/pynxtools/dataconverter/readers/xrd/xrd_helper.py
@@ -0,0 +1,293 @@
+"""XRD helper stuffs."""
+
+# Copyright The NOMAD Authors.
+#
+# This file is part of NOMAD. See https://nomad-lab.eu for further info.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import warnings
+import numpy as np
+from pynxtools.dataconverter.helpers import transform_to_intended_dt
+from pynxtools.dataconverter.template import Template
+
+
+class KeyValueNotFoundWarning(Warning):
+    """Warning category for missing key-value pairs."""
+
+
+def get_a_value_or_warn(return_value="",
+                        warning_category=KeyValueNotFoundWarning,
+                        message="Key-value not found.",
+                        stack_level=2):
+    """Return the given default value after raising a warning message."""
+
+    warnings.warn(f"\033[1;31m {message}\033[0m", warning_category, stack_level)
+    return return_value
+
+
+def check_unit(unit: str):
+    """Handle conflicting units.
+    Some units come with vendor files that do not follow the correct format.
+    """
+ if unit is None:
+ return unit
+ unit_map = {'Angstrom': '\u212B',
+ }
+ correct_unit = unit_map.get(unit, None)
+ if correct_unit is None:
+ return unit
+ return correct_unit
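A quick sanity check of the unit mapping, reproduced standalone:

```python
def check_unit(unit):
    """Map known vendor unit spellings to their canonical symbol."""
    if unit is None:
        return unit
    unit_map = {'Angstrom': '\u212B'}  # vendor spelling -> angstrom sign
    return unit_map.get(unit, unit)


print(check_unit('Angstrom'))  # normalised to the angstrom sign
print(check_unit('deg'))       # unknown units pass through unchanged
```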
+
+
+# pylint: disable=too-many-statements
+def feed_xrdml_to_template(template, xrd_dict, eln_dict, file_term, config_dict=None):
+ """Fill template with data from xrdml type file.
+
+ Parameters
+ ----------
+ template : Dict
+ Template generated from nxdl definition file.
+ xrd_dict : dict
+        A dict mapping slash-separated keys to the data; each key gives the
+        location of the data in the raw file.
+ eln_dict : dict
+        Data provided by the user (ELN), keyed by NeXus concept paths.
+    file_term : str
+        String describing the file extension and version (e.g. xrdml_1.5), used to select the
+        proper dict from the config file.
+ config_dict : Dict
+ Dictionary from config file that maps NeXus concept to data from different data file
+ versions. E.g.
+ {
+ "/ENTRY[entry]/2theta_plot/chi": {"file_exp": {"value": "",
+ "@units": ""},},
+ "/ENTRY[entry]/2theta_plot/intensity": {"file_exp": {"value": "/detector",
+ "@units": ""},}
+ }
+ """
+
+ def fill_template_from_config_data(config_dict: dict, template: Template,
+ xrd_dict: dict, file_term: str) -> None:
+ """
+ Parameters
+ ----------
+ config_dict : dict
+ Python dict that is nested dict for different file versions.
+ e.g.
+ {"/ENTRY[entry]/2theta_plot/chi": {"file_exp": {"value": "",
+ "@units": ""},},
+ "/ENTRY[entry]/2theta_plot/intensity": {"file_exp": {"value": "/detector",
+ "@units": ""},}
+ }
+ template : Template
+
+ Return
+ ------
+ None
+ """
+ for nx_key, val in config_dict.items():
+ if isinstance(val, dict):
+ raw_data_des: dict = val.get(file_term, None)
+ if raw_data_des is None:
+                    raise ValueError(f"Config file does not have any data map"
+                                     f" for file term {file_term}")
+ # the field does not have any value
+ if not raw_data_des.get('value', None):
+ continue
+ # Note: path is the data path in raw file
+ for val_atr_key, path in raw_data_des.items():
+ # data or field val
+ if val_atr_key == 'value':
+ template[nx_key] = xrd_dict.get(path, None)
+ elif path and val_atr_key == '@units':
+ template[nx_key + '/' + val_atr_key] = check_unit(
+ xrd_dict.get(path, None))
+ # attr e.g. @AXISNAME
+ elif path and val_atr_key.startswith('@'):
+ template[nx_key + '/' + val_atr_key] = xrd_dict.get(path, None)
+ if not isinstance(val, dict) and isinstance(val, str):
+ template[nx_key] = val
+
+    def two_theta_plot():
+
+        intensity = transform_to_intended_dt(template.get("/ENTRY[entry]/2theta_plot/intensity",
+                                                          None))
+        if intensity is not None:
+            intensity_len = np.shape(intensity)[0]
+        else:
+            raise ValueError("No intensity is found")
+
+        two_theta_gr = "/ENTRY[entry]/2theta_plot/"
+        if template.get(f"{two_theta_gr}omega", None) is None:
+            omega_start = template.get("/ENTRY[entry]/COLLECTION[collection]/omega/start", None)
+            omega_end = template.get("/ENTRY[entry]/COLLECTION[collection]/omega/end", None)
+
+            template["/ENTRY[entry]/2theta_plot/omega"] = np.linspace(float(omega_start),
+                                                                      float(omega_end),
+                                                                      intensity_len)
+
+        if template.get(f"{two_theta_gr}two_theta", None) is None:
+            tw_theta_start = template.get("/ENTRY[entry]/COLLECTION[collection]/2theta/start",
+                                          None)
+            tw_theta_end = template.get("/ENTRY[entry]/COLLECTION[collection]/2theta/end", None)
+            template[f"{two_theta_gr}two_theta"] = np.linspace(float(tw_theta_start),
+                                                               float(tw_theta_end),
+                                                               intensity_len)
+        template[f"{two_theta_gr}@axes"] = ["two_theta"]
+        template[f"{two_theta_gr}@signal"] = "intensity"
+
+ def q_plot():
+ q_plot_gr = "/ENTRY[entry]/q_plot"
+ alpha_2 = template.get("/ENTRY[entry]/INSTRUMENT[instrument]/SOURCE[source]/k_alpha_two",
+ None)
+ alpha_1 = template.get("/ENTRY[entry]/INSTRUMENT[instrument]/SOURCE[source]/k_alpha_one",
+ None)
+ two_theta: np.ndarray = template.get("/ENTRY[entry]/2theta_plot/two_theta", None)
+ if two_theta is None:
+ raise ValueError("Two-theta data is not found")
+ if isinstance(two_theta, np.ndarray):
+ theta: np.ndarray = two_theta / 2
+ ratio_k = "/ENTRY[entry]/INSTRUMENT[instrument]/SOURCE[source]/ratio_k_alphatwo_k_alphaone"
+ if alpha_1 and alpha_2:
+ ratio = alpha_2 / alpha_1
+ template[ratio_k] = ratio
+ lamda = ratio * alpha_1 + (1 - ratio) * alpha_2
+ q_vec = (4 * np.pi / lamda) * np.sin(np.deg2rad(theta))
+ template[q_plot_gr + "/" + "q_vec"] = q_vec
+            template[q_plot_gr + "/" + "@q_vec_indices"] = 0
+ template[q_plot_gr + "/" + "@axes"] = ["q_vec"]
+
+ template[q_plot_gr + "/" + "@signal"] = "intensity"
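The 2θ→q conversion applied in `q_plot` above can be checked numerically; the wavelength below is an illustrative value, not one read from a file:

```python
import numpy as np

two_theta = np.array([20.0, 40.0, 60.0])  # scattering angles in degrees
lamda = 1.5406                            # hypothetical wavelength in angstrom
theta = two_theta / 2
q = (4 * np.pi / lamda) * np.sin(np.deg2rad(theta))
print(q)  # q grows monotonically with the scattering angle
```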
+
+ def handle_special_fields():
+ """Some fields need special treatment."""
+
+ key = "/ENTRY[entry]/COLLECTION[collection]/goniometer_x"
+ gonio_x = template.get(key, None)
+
+ template[key] = gonio_x[0] if (isinstance(gonio_x, np.ndarray)
+ and gonio_x.shape == (1,)) else gonio_x
+
+ key = "/ENTRY[entry]/COLLECTION[collection]/goniometer_y"
+ gonio_y = template.get(key, None)
+
+ template[key] = gonio_y[0] if (isinstance(gonio_y, np.ndarray)
+ and gonio_y.shape == (1,)) else gonio_y
+
+ key = "/ENTRY[entry]/COLLECTION[collection]/goniometer_z"
+ gonio_z = template.get(key, None)
+
+ template[key] = gonio_z[0] if (isinstance(gonio_z, np.ndarray)
+ and gonio_z.shape == (1,)) else gonio_z
+
+ key = "/ENTRY[entry]/COLLECTION[collection]/count_time"
+ count_time = template.get(key, None)
+
+ template[key] = count_time[0] if (isinstance(count_time, np.ndarray)
+ and count_time.shape == (1,)) else count_time
+
+ fill_template_from_config_data(config_dict, template,
+ xrd_dict, file_term)
+ two_theta_plot()
+ q_plot()
+ handle_special_fields()
+
+ fill_template_from_eln_data(eln_dict, template)
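The mapping logic of `fill_template_from_config_data` can be sketched in a few lines; the config keys, the `xrdml_1.5` file term, and the raw-file path below are hypothetical examples in the spirit of `config.py`:

```python
# Sketch of config-driven template filling (hypothetical keys and paths).
config = {
    "/ENTRY[entry]/title": {"xrdml_1.5": {"value": "/xrdMeasurements/title",
                                          "@units": ""}},
    "/ENTRY[entry]/definition": "NXxrd_pan",
}
raw = {"/xrdMeasurements/title": "demo scan"}  # flattened raw-file dict

template = {}
for nx_key, val in config.items():
    if isinstance(val, dict):
        des = val["xrdml_1.5"]      # pick the map for this file version
        if des.get("value"):        # skip fields with no data path
            template[nx_key] = raw.get(des["value"])
    elif isinstance(val, str):      # literal values go in directly
        template[nx_key] = val

print(template)
```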
+
+
+# pylint: disable=unused-argument
+def feed_udf_to_template(template, xrd_dict, eln_dict, config_dict):
+    """Fill the template from a .udf file (not yet implemented)."""
+
+
+def feed_raw_to_template(template, xrd_dict, eln_dict, config_dict):
+    """Fill the template from a .raw file (not yet implemented)."""
+
+
+def feed_xye_to_template(template, xrd_dict, eln_dict, config_dict):
+    """Fill the template from a .xye file (not yet implemented)."""
+
+
+def fill_template_from_eln_data(eln_data_dict, template):
+    """Fill out the template from the dict generated from the ELN yaml file.
+
+    Parameters
+    ----------
+    eln_data_dict : dict[str, Any]
+        Python dictionary from the ELN file.
+    template : dict[str, Any]
+
+    Returns
+    -------
+    None
+    """
+
+ if eln_data_dict is None:
+ return
+ for e_key, e_val in eln_data_dict.items():
+ template[e_key] = transform_to_intended_dt(e_val)
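The ELN copy step in `fill_template_from_eln_data` amounts to a keyed copy with type coercion; here `transform_to_intended_dt` is stubbed with a simple cast (the real helper handles more cases), and the ELN keys are hypothetical:

```python
def to_intended_dt(val):
    """Simplified stand-in for transform_to_intended_dt: try int, then float."""
    try:
        return int(val)
    except (TypeError, ValueError):
        try:
            return float(val)
        except (TypeError, ValueError):
            return val


eln = {"/ENTRY[entry]/SAMPLE[sample]/sample_id": "42",
       "/ENTRY[entry]/SAMPLE[sample]/sample_name": "LaB6"}
template = {key: to_intended_dt(val) for key, val in eln.items()}
print(template)  # numeric strings are coerced, others pass through
```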
+
+
+def fill_nxdata_from_xrdml(template,
+ xrd_flattend_dict,
+ dt_nevigator_from_config_file,
+ data_group_concept
+ ):
+    """Fill NXdata fields in the template from a flattened xrdml dict (not yet implemented)."""
diff --git a/pynxtools/dataconverter/readers/xrd/xrd_parser.py b/pynxtools/dataconverter/readers/xrd/xrd_parser.py
new file mode 100644
index 000000000..9d944cad7
--- /dev/null
+++ b/pynxtools/dataconverter/readers/xrd/xrd_parser.py
@@ -0,0 +1,448 @@
+"""
+XRD file parser collection.
+"""
+
+# Copyright The NOMAD Authors.
+#
+# This file is part of NOMAD. See https://nomad-lab.eu for further info.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from typing import Dict, Tuple, Optional, List
+
+from pathlib import Path
+import warnings
+import xml.etree.ElementTree as ET # for XML parsing
+from pynxtools.dataconverter.helpers import transform_to_intended_dt, remove_namespace_from_tag
+from pynxtools.dataconverter.readers.xrd.xrd_helper import feed_xrdml_to_template
+
+
+def fill_slash_sep_dict_from_nested_dict(parent_path: str, nested_dict: dict,
+ slash_sep_dict: dict):
+ """Convert a nested dict into slash separated dict.
+
+ Extend slash_sep_dict by key (slash separated key) from nested dict.
+
+ Parameters
+ ----------
+ parent_path : str
+ Parent path to be appended at the starting of slash separated key.
+ nested_dict : dict
+ Dict nesting other dict.
+ slash_sep_dict : dict
+ Plain dict to be extended by key value generated from nested_dict.
+ """
+ for key, val in nested_dict.items():
+ slash_sep_path = parent_path + key
+ if isinstance(val, dict):
+ fill_slash_sep_dict_from_nested_dict(slash_sep_path, val, slash_sep_dict)
+ else:
+ slash_sep_dict[slash_sep_path] = val
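The flattening idea behind `fill_slash_sep_dict_from_nested_dict` can be sketched standalone; unlike the reader's version, this sketch inserts the `/` between levels itself:

```python
def flatten(parent_path, nested_dict, flat):
    """Recursively flatten a nested dict into slash-separated paths."""
    for key, val in nested_dict.items():
        path = f"{parent_path}/{key}".replace("//", "/")
        if isinstance(val, dict):
            flatten(path, val, flat)
        else:
            flat[path] = val


flat: dict = {}
flatten("/", {"scan": {"mode": "Continuous",
                       "dataPoints": {"intensities": [1, 2, 3]}}}, flat)
print(flat)
```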
+
+
+class IgnoreNodeTextWarning(Warning):
+ """Special class to warn node text skip."""
+
+
+class XRDMLParser:
+    """Parser for xrdml files, with the help of other XRD libraries, e.g. panalytical_xml."""
+
+ def __init__(self, file_path):
+ """Construct XRDMLParser obj.
+
+ Parameters
+ ----------
+ file_path : str
+ Path of the file.
+ """
+        # May be utilised later for different versions of the file
+ # self.__version = None
+ self.__xrd_dict = {}
+ self.__file_path = file_path
+ self.xrdml_version: str = ""
+ self.xml_root = ET.parse(self.__file_path).getroot()
+ self.find_version()
+        # Important note on the key-val pair separator list: preceding elements take
+        # precedence over the following elements
+        self.key_val_pair_sprtr = (';', ',')
+        # Important note on the key-val separator list: preceding elements take
+        # precedence over the following elements
+        self.key_val_sprtr = ('=', ':')
+
+ def find_version(self):
+        """Find the xrdml file version."""
+ schema_loc = "{http://www.w3.org/2001/XMLSchema-instance}schemaLocation"
+        # e.g. 'http://www.xrdml.com/XRDMeasurement/1.5 ...'
+ version = self.xml_root.get(schema_loc).split(' ')[0]
+ self.xrdml_version = version.split('/')[-1]
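The `find_version` logic reads the xrdml version out of the `xsi:schemaLocation` attribute; a self-contained check against a hypothetical minimal root element:

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal xrdml root element carrying the schemaLocation attribute
xml = ('<xrdMeasurements xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" '
       'xsi:schemaLocation="http://www.xrdml.com/XRDMeasurement/1.5 '
       'http://www.xrdml.com/XRDMeasurement/1.5/XRDMeasurement.xsd"/>')
root = ET.fromstring(xml)

# Namespaced attributes are accessed with the Clark notation {uri}name
loc = root.get("{http://www.w3.org/2001/XMLSchema-instance}schemaLocation")
version = loc.split(" ")[0].split("/")[-1]
print(version)  # 1.5
```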
+
+ def get_slash_separated_xrd_dict(self):
+ """Return a dict with slash separated key and value from xrd file.
+
+ The key is the slash separated string path for nested xml elements.
+
+ Returns
+ -------
+ dict:
+ Dictionary where key maps xml nested elements by slash separated str.
+ """
+ # To navigate different functions in future according to some parameters
+ # such as version, and data analysis module from panalytical_xml
+ self.handle_with_panalytical_module()
+ return self.__xrd_dict
+
+ def handle_with_panalytical_module(self):
+ """Handeling XRDml file by parsing xml file and Pnanalytical_xml parser
+
+ Panalytical module extends and constructs some array data from experiment settings
+ comes with xml file.
+ """
+ self.parse_each_elm(parent_path='/', xml_node=self.xml_root)
+ nested_data_dict: Dict[str, any] = {}
+ # Note: To use panalytical lib
+ # Extract other numerical data e.g. 'hkl', 'Omega', '2Theta', CountTime etc
+ # using panalytical_xml module
+ # parsed_data = XRDMLFile(self.__file_path)
+ # nested_data_dict = parsed_data.scan.ddict
+ fill_slash_sep_dict_from_nested_dict('/', nested_data_dict, self.__xrd_dict)
+
+ def process_node_text(self, parent_path, node_txt) -> None:
+ """Processing text of node
+
+ Parameters
+ ----------
+ parent_path : str
+ Starting str of the key when forming a string key.
+ node_txt : str
+ text from node.
+
+ Returns
+ ------
+ None
+ """
+ key_val_pairs = []
+ # get key-val pair
+ for sep in self.key_val_pair_sprtr:
+ if sep in node_txt:
+ key_val_pairs.extend(node_txt.split(sep))
+ break
+ # Separate key-val, build full path and
+ # store them in dict
+ if key_val_pairs:
+ for key_val in key_val_pairs:
+ for k_v_sep in self.key_val_sprtr:
+ if k_v_sep in key_val:
+ key, val = key_val.split(k_v_sep)
+ key = key.replace(' ', '')
+ self.__xrd_dict['/'.join([parent_path, key])] = val
+ break
+ # Handling array data comes as node text
+ else:
+ try:
+ self.__xrd_dict[parent_path] = transform_to_intended_dt(node_txt)
+ except ValueError:
+ warnings.warn(f'Element text {node_txt} is ignored from parsing!',
+ IgnoreNodeTextWarning)
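The separator precedence in `process_node_text` can be condensed into a small standalone helper (a sketch under the same separator tuples; the sample node text is invented for illustration):

```python
key_val_pair_sprtr = (';', ',')   # pair separators, first match wins
key_val_sprtr = ('=', ':')        # key-value separators, first match wins

def split_node_text(node_txt):
    """Split text like 'Scan axis=Gonio; Time per step=10.0' into a dict."""
    pairs = []
    for sep in key_val_pair_sprtr:
        if sep in node_txt:
            pairs = node_txt.split(sep)
            break
    result = {}
    for pair in pairs:
        for k_v_sep in key_val_sprtr:
            if k_v_sep in pair:
                # maxsplit=1 guards against values that contain the separator
                key, val = pair.split(k_v_sep, 1)
                result[key.replace(' ', '')] = val
                break
    return result

print(split_node_text('Scan axis=Gonio; Time per step=10.0'))
```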
+
+ def parse_each_elm(self, parent_path, xml_node,
+ multi_childs_tag: str = '',
+ tag_extensions: Optional[List[int]] = None):
+ """Check each xml element and send the element to intended function.
+
+ Parameters
+ ----------
+ parent_path : str
+ Path to be in the starting of the key composing from element e.g. '/'.
+ xml_node : XML.Element
+ Any element except process instruction nodes.
+ multi_childs_tag : str
+ Tag that is shared by several child nodes.
+ tag_extensions : List[int]
+ List of extensions of the child tag if there are several children having the same
+ tag.
+
+ Returns
+ ------
+ None
+ """
+
+ tag = remove_namespace_from_tag(xml_node.tag)
+ # Take care of special node of 'entry' tag
+ if tag == 'entry':
+ parent_path = self.parse_entry_elm(parent_path, xml_node,
+ multi_childs_tag, tag_extensions)
+ else:
+ parent_path = self.parse_general_elm(parent_path, xml_node,
+ multi_childs_tag, tag_extensions)
+
+ _, multi_childs_tag = self.has_multi_childs_with_same_tag(xml_node)
+ # List of tag extensions for child nodes which have the same tag.
+ tag_extensions = [0]
+ for child in iter(xml_node):
+ if child is not None:
+ self.parse_each_elm(parent_path, child,
+ multi_childs_tag, tag_extensions)
+
+ def has_multi_childs_with_same_tag(self, parent_node: ET.Element) -> Tuple[bool, str]:
+ """Check for multiple childs that have the same tag.
+
+ Parameter:
+ ----------
+ parent_node : ET.Element
+ Parent node that might have multiple children with the same tag.
+
+ Returns:
+ --------
+ Tuple[bool, str]
+ (true if there are multiple children with the same tag, tag).
+ """
+ tag: Optional[str] = None
+ for child in iter(parent_node):
+ temp_tag = remove_namespace_from_tag(child.tag)
+ if tag is None:
+ tag = temp_tag
+ else:
+ if tag == temp_tag:
+ return (True, tag)
+
+ return (False, '')
+
+ def parse_general_elm(self, parent_path, xml_node,
+ multi_childs_tag, tag_extensions: List[int]):
+ """Handle general element except entry element.
+ Parameters
+ ----------
+ parent_path : str
+ Path to be in the starting of the key composing from element e.g. '/'.
+ xml_node : XML.Element
+ Any element except process instruction and entry nodes.
+ multi_childs_tag : str
+ Tag that is shared by several siblings.
+ tag_extensions : List[int]
+ List of extensions of the sibling tag if there are several siblings having
+ the same tag.
+
+ Returns
+ -------
+ None
+ """
+
+ tag = remove_namespace_from_tag(xml_node.tag)
+ if tag == multi_childs_tag:
+ new_ext = tag_extensions[-1] + 1
+ tag = tag + '_' + str(new_ext)
+ tag_extensions.append(new_ext)
+
+ if parent_path == '/':
+ parent_path = parent_path + tag
+ else:
+ # New parent path ends with element tag
+ parent_path = '/'.join([parent_path, tag])
+
+ node_attr = xml_node.attrib
+ if node_attr:
+ for key, val in node_attr.items():
+ # Some attr has namespace
+ key = remove_namespace_from_tag(key)
+ key = key.replace(' ', '_')
+ path_extend = '/'.join([parent_path, key])
+ self.__xrd_dict[path_extend] = val
+
+ node_txt = xml_node.text
+ if node_txt:
+ self.process_node_text(parent_path, node_txt)
+
+ return parent_path
+
+ def parse_entry_elm(self, parent_path: str, xml_node: ET.Element,
+ multi_childs_tag: str, tag_extensions: List[int]):
+ """Handle entry element.
+
+ Parameters
+ ----------
+ parent_path : str
+ Path to be in the starting of the key composing from element e.g. '/'.
+ xml_node : XML.Element
+ Any entry node.
+ multi_childs_tag : str
+ Tag that is shared by several siblings.
+ tag_extensions : List[int]
+ List of extensions of the sibling tag if there are several siblings having
+ the same tag.
+
+ Returns
+ -------
+ str:
+ Parent path.
+ """
+
+ tag = remove_namespace_from_tag(xml_node.tag)
+
+ if tag == multi_childs_tag:
+ new_ext = tag_extensions[-1] + 1
+ tag_extensions.append(new_ext)
+ tag = tag + '_' + str(new_ext)
+
+ if parent_path == '/':
+ parent_path = '/' + tag
+ else:
+ # Parent path ends with element tag
+ parent_path = '/'.join([parent_path, tag])
+
+ node_attr = xml_node.attrib
+ if node_attr:
+ for key, val in node_attr.items():
+ # Some attributes have namespace
+ key = remove_namespace_from_tag(key)
+ path_extend = '/'.join([parent_path, key])
+ self.__xrd_dict[path_extend] = val
+
+ # Text in the entry element needs special handling
+ node_txt = xml_node.text
+ if node_txt:
+ self.process_node_text(parent_path, node_txt)
+
+ return parent_path
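The element-walking logic above (strip namespace, extend the parent path, store attributes and text) can be illustrated end-to-end with a toy XML snippet. The tag names and values below are invented for the sketch:

```python
import xml.etree.ElementTree as ET

xml_text = """<xrdMeasurement status="Completed">
  <scan mode="Pre-set time"><header><author>op</author></header></scan>
</xrdMeasurement>"""

flat = {}

def walk(node, parent_path):
    """Recursively flatten an element tree into slash-separated keys."""
    tag = node.tag.split('}')[-1]  # strip any '{namespace}' prefix
    path = parent_path + tag if parent_path == '/' else parent_path + '/' + tag
    for key, val in node.attrib.items():
        flat[path + '/' + key] = val
    if node.text and node.text.strip():
        flat[path] = node.text.strip()
    for child in node:
        walk(child, path)

walk(ET.fromstring(xml_text), '/')
print(flat)
```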
+
+
+class FormatParser:
+ """A class to identify and parse different file formats."""
+
+ def __init__(self, file_path):
+ """Construct FormatParser obj.
+
+ Parameters
+ ----------
+ file_path : str
+ XRD file to be parsed.
+
+ Returns
+ -------
+ None
+ """
+ self.file_path = file_path
+ self.file_parser = XRDMLParser(self.file_path)
+ # terminological name of the file, used to read the config file
+ self.file_term = 'xrdml_' + self.file_parser.xrdml_version
+
+ def get_file_format(self):
+ """Identifies the format of a given file.
+
+ Returns:
+ --------
+ str:
+ The file extension of the file.
+ """
+ file_extension = ''.join(Path(self.file_path).suffixes)
+ return file_extension
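`get_file_format` joins all of `Path.suffixes` rather than taking only the last suffix, so compound extensions survive intact. A quick sketch (file names are examples):

```python
from pathlib import Path

# Single extension: suffixes is ['.xrdml']
print(''.join(Path('measurement.xrdml').suffixes))  # .xrdml
# Compound extension: suffixes is ['.tar', '.gz'], kept whole
print(''.join(Path('archive.tar.gz').suffixes))     # .tar.gz
```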
+
+ def parse_xrdml(self):
+ """Parses a Panalytical XRDML file.
+
+ Returns
+ -------
+ dict
+ A dictionary containing the parsed XRDML data.
+ """
+ return self.file_parser.get_slash_separated_xrd_dict()
+
+ def parse_panalytical_udf(self):
+ """Parse the Panalytical .udf file.
+
+ Returns
+ -------
+ None
+ Placeholder for parsing .udf files.
+ """
+
+ def parse_bruker_raw(self):
+ """Parse the Bruker .raw file.
+
+ Returns
+ -------
+ None
+ """
+
+ def parse_bruker_xye(self):
+ """Parse the Bruker .xye file.
+
+ Returns
+ -------
+ None
+ """
+
+ # pylint: disable=import-outside-toplevel
+ def parse_and_populate_template(self, template, config_dict, eln_dict):
+ """Parse xrd file into dict and fill the template.
+
+ Parameters
+ ----------
+ template : Template
+ NeXus template generated from NeXus application definitions.
+ config_dict : dict
+ A dict generated from a python config file.
+ eln_dict : dict
+ A dict generated from the eln yaml file.
+
+ Returns
+ -------
+ None
+ """
+
+ xrd_dict = self.parse()
+ if len(config_dict) == 0 and self.file_parser.xrdml_version == '1.5':
+ from pynxtools.dataconverter.readers.xrd.config import xrdml
+ config_dict = xrdml
+ feed_xrdml_to_template(template, xrd_dict, eln_dict,
+ file_term=self.file_term, config_dict=config_dict)
+
+ def parse(self):
+ '''Parses the file based on its format.
+
+ Returns:
+ dict
+ A dictionary containing the parsed data.
+
+ Raises:
+ ValueError: If the file format is unsupported.
+ '''
+ file_format = self.get_file_format()
+ slash_sep_dict = {}
+ if file_format == ".xrdml":
+ slash_sep_dict = self.parse_xrdml()
+ # elif file_format == ".udf":
+ # return self.parse_panalytical_udf()
+ # elif file_format == ".raw":
+ # return self.parse_bruker_raw()
+ # elif file_format == ".xye":
+ # return self.parse_bruker_xye()
+ # else:
+ # raise ValueError(f"Unsupported file format: {file_format}")
+ return slash_sep_dict
+
+
+def parse_and_fill_template(template, xrd_file, config_dict, eln_dict):
+ """Parse xrd file and fill the template with data from that file.
+
+ Parameters
+ ----------
+ template : Template[dict]
+ Template generated from the nxdl definition.
+ xrd_file : str
+ Name of the xrd file with extension
+ config_dict : Dict
+ Dictionary from config.json or similar file.
+ eln_dict : Dict
+ Plain and '/' separated dictionary from yaml for ELN.
+ """
+
+ format_parser = FormatParser(xrd_file)
+ format_parser.parse_and_populate_template(template, config_dict, eln_dict)
diff --git a/pynxtools/dataconverter/template.py b/pynxtools/dataconverter/template.py
index 286cbaaed..fa6907d36 100644
--- a/pynxtools/dataconverter/template.py
+++ b/pynxtools/dataconverter/template.py
@@ -114,6 +114,24 @@ def get_documented(self):
"""Returns a dictionary of all the optionalities merged into one."""
return {**self.optional, **self.recommended, **self.required}
+ def __contains__(self, k):
+ """
+ Supports in operator for the nested Template keys
+ """
+ return any([
+ k in self.optional,
+ k in self.recommended,
+ k in self.undocumented,
+ k in self.required
+ ])
+
+ def get(self, key: str, default=None):
+ """Proxies the get function to our internal __getitem__"""
+ try:
+ return self[key]
+ except KeyError:
+ return default
+
def __getitem__(self, k):
"""Handles how values are accessed from the Template object."""
# Try setting item in all else throw error. Does not append to default.
@@ -130,7 +148,10 @@ def __getitem__(self, k):
return self.required[k]
except KeyError:
return self.undocumented[k]
- return self.get_optionality(k)
+ if k in ("required", "optional", "recommended", "undocumented"):
+ return self.get_optionality(k)
+ raise KeyError("Only paths starting with '/' or one of [optional_parents, "
+ "lone_groups, required, optional, recommended, undocumented] can be used.")
def clear(self):
"""Clears all data stored in the Template object."""
@@ -171,12 +192,15 @@ def add_entry(self, entry_name):
def __delitem__(self, key):
"""Delete a dictionary key or template key"""
-
if key in self.optional.keys():
del self.optional[key]
- if key in self.required.keys():
+ elif key in self.required.keys():
del self.required[key]
- if key in self.recommended.keys():
+ elif key in self.recommended.keys():
del self.recommended[key]
+ elif key in self.undocumented.keys():
+ del self.undocumented[key]
+ else:
+ raise KeyError(f"{key} does not exist.")
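The `__contains__`/`get` additions to `Template` follow the standard pattern of backing membership and lookup with several internal dicts. A toy stand-in (hypothetical class, for illustration only; the real `Template` has more machinery):

```python
class MiniTemplate:
    """Toy stand-in for Template: four optionality dicts behind one mapping API."""
    def __init__(self):
        self.optional, self.recommended = {}, {}
        self.required, self.undocumented = {}, {}

    def __contains__(self, k):
        return any(k in d for d in (self.optional, self.recommended,
                                    self.undocumented, self.required))

    def __getitem__(self, k):
        for d in (self.optional, self.recommended, self.required, self.undocumented):
            if k in d:
                return d[k]
        raise KeyError(k)

    def get(self, key, default=None):
        """Proxy get() to __getitem__, mirroring the PR's approach."""
        try:
            return self[key]
        except KeyError:
            return default

t = MiniTemplate()
t.required['/ENTRY[entry]/title'] = 'scan 1'
print('/ENTRY[entry]/title' in t)  # True
print(t.get('/missing', 'n/a'))    # n/a
```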
diff --git a/pynxtools/dataconverter/writer.py b/pynxtools/dataconverter/writer.py
index 486d48ace..81b3045da 100644
--- a/pynxtools/dataconverter/writer.py
+++ b/pynxtools/dataconverter/writer.py
@@ -105,6 +105,7 @@ def handle_shape_entries(data, file, path):
return layout
+# pylint: disable=too-many-locals, inconsistent-return-statements
def handle_dicts_entries(data, grp, entry_name, output_path, path):
"""Handle function for dictionaries found as value of the nexus file.
@@ -163,7 +164,13 @@ def handle_dicts_entries(data, grp, entry_name, output_path, path):
raise InvalidDictProvided("A dictionary was provided to the template but it didn't"
" fall into any of the know cases of handling"
" dictionaries. This occured for: " + entry_name)
- return grp[entry_name]
+ # Check whether the link has been established or not
+ try:
+ return grp[entry_name]
+ except KeyError:
+ logger.warning("No path '%s' available to be linked.", path)
+ del grp[entry_name]
+ return None
class Writer:
@@ -171,26 +178,27 @@ class Writer:
Args:
data (dict): Dictionary containing the data to convert.
- nxdl_path (str): Path to the nxdl file to use during conversion.
+ nxdl_f_path (str): Path to the nxdl file to use during conversion.
output_path (str): Path to the output NeXus file.
Attributes:
data (dict): Dictionary containing the data to convert.
- nxdl_path (str): Path to the nxdl file to use during conversion.
+ nxdl_f_path (str): Path to the nxdl file to use during conversion.
output_path (str): Path to the output NeXus file.
output_nexus (h5py.File): The h5py file object to manipulate output file.
nxdl_data (dict): Stores xml data from given nxdl file to use during conversion.
nxs_namespace (str): The namespace used in the NXDL tags. Helps search for XML children.
"""
- def __init__(self, data: dict = None, nxdl_path: str = None,
- output_path: str = None, io_mode: str = "w"):
+ def __init__(self, data: dict = None,
+ nxdl_f_path: str = None,
+ output_path: str = None):
"""Constructs the necessary objects required by the Writer class."""
self.data = data
- self.nxdl_path = nxdl_path
+ self.nxdl_f_path = nxdl_f_path
self.output_path = output_path
- self.output_nexus = h5py.File(self.output_path, io_mode)
- self.nxdl_data = ET.parse(self.nxdl_path).getroot()
+ self.output_nexus = h5py.File(self.output_path, "w")
+ self.nxdl_data = ET.parse(self.nxdl_f_path).getroot()
self.nxs_namespace = get_namespace(self.nxdl_data)
def __nxdl_to_attrs(self, path: str = '/') -> dict:
@@ -235,8 +243,9 @@ def ensure_and_get_parent_node(self, path: str, undocumented_paths) -> h5py.Grou
return grp
return self.output_nexus[parent_path_hdf5]
- def write(self):
- """Writes the NeXus file with previously validated data from the reader with NXDL attrs."""
+ def _put_data_into_hdf5(self):
+ """Store data in hdf5 in in-memory file or file."""
+
hdf5_links_for_later = []
def add_units_key(dataset, path):
@@ -274,6 +283,9 @@ def add_units_key(dataset, path):
for links in hdf5_links_for_later:
dataset = handle_dicts_entries(*links)
+ if dataset is None:
+ # If target of a link is invalid to be linked
+ del self.data[links[-1]]
for path, value in self.data.items():
try:
@@ -288,6 +300,7 @@ def add_units_key(dataset, path):
if entry_name[0] != "@":
path_hdf5 = helpers.convert_data_dict_path_to_hdf5_path(path)
+
add_units_key(self.output_nexus[path_hdf5], path)
else:
# consider changing the name here the lvalue can also be group!
@@ -297,4 +310,9 @@ def add_units_key(dataset, path):
raise IOError(f"Unknown error occured writing the path: {path} "
f"with the following message: {str(exc)}") from exc
- self.output_nexus.close()
+ def write(self):
+ """Writes the NeXus file with previously validated data from the reader with NXDL attrs."""
+ try:
+ self._put_data_into_hdf5()
+ finally:
+ self.output_nexus.close()
diff --git a/pynxtools/eln_mapper/README.md b/pynxtools/eln_mapper/README.md
new file mode 100644
index 000000000..13f759466
--- /dev/null
+++ b/pynxtools/eln_mapper/README.md
@@ -0,0 +1,19 @@
+# ELN generator
+This is a helper tool for generating ELN files:
+- A simple ELN generator that can be used in a console or jupyter-notebook
+- A scheme based ELN generator that can be used in NOMAD; the resulting ELN can be used as a custom scheme in NOMAD.
+
+```
+$ eln_generator --options
+
+Options:
+ --nxdl TEXT Name of NeXus definition without extension
+ (.nxdl.xml). [required]
+ --skip-top-levels INTEGER To skip up to a level of the parent hierarchical structure.
+ E.g. for the default 1 the part Entry[ENTRY] from
+ /Entry[ENTRY]/Instrument[INSTRUMENT]/... will
+ be skipped. [default: 1]
+ --output-file TEXT Name of output file.
+ --eln-type [eln|scheme_eln] Choose a type from the eln or scheme_eln. [required]
+ --help Show this message and exit.
+```
diff --git a/pynxtools/dataconverter/readers/apm/utils/apm_utils.py b/pynxtools/eln_mapper/__init__.py
similarity index 59%
rename from pynxtools/dataconverter/readers/apm/utils/apm_utils.py
rename to pynxtools/eln_mapper/__init__.py
index f04c329ee..7f1819634 100644
--- a/pynxtools/dataconverter/readers/apm/utils/apm_utils.py
+++ b/pynxtools/eln_mapper/__init__.py
@@ -1,4 +1,3 @@
-#
# Copyright The NOMAD Authors.
#
# This file is part of NOMAD. See https://nomad-lab.eu for further info.
@@ -15,12 +14,3 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
-"""Set of utility tools for parsing file formats used by atom probe."""
-
-# pylint: disable=no-member
-
-# ifes_apt_tc_data_modeling replaces now the previously here stored
-# convenience functions which translated human-readable ion names into
-# isotope_vector descriptions and vice versa as proposed by M. Kuehbach et al. in
-# DOI: 10.1017/S1431927621012241 to the human-readable ion names which are use
-# in P. Felfer et al."s atom probe toolbox
diff --git a/pynxtools/eln_mapper/eln.py b/pynxtools/eln_mapper/eln.py
new file mode 100644
index 000000000..078dd4d18
--- /dev/null
+++ b/pynxtools/eln_mapper/eln.py
@@ -0,0 +1,189 @@
+"""For functions that directly or indirectly help to for rendering ELN.
+Note that this not schema eln that is rendered to Nomad rather the eln that
+is generated by schema eln."""
+
+# Copyright The NOMAD Authors.
+#
+# This file is part of NOMAD. See https://nomad-lab.eu for further info.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import os
+import re
+from typing import Any, Dict
+import xml.etree.ElementTree as ET
+import yaml
+
+from pynxtools.dataconverter.helpers import generate_template_from_nxdl
+from pynxtools.dataconverter.template import Template
+from pynxtools.nexus.nexus import get_nexus_definitions_path
+
+
+def retrieve_nxdl_file(nexus_def: str) -> str:
+ """Retrive full path of nexus file.
+
+ Parameters
+ ----------
+ nexus_def : str
+ Name of nexus definition e.g. NXmpes
+
+ Returns
+ -------
+ str
+ Returns full path of file e.g. /NXmpes.nxdl.xml
+
+ Raises
+ ------
+ ValueError
+ Need correct definition name, e.g. NXmpes not NXmpes.nxdl.xml
+ """
+ definition_path = get_nexus_definitions_path()
+
+ def_path = os.path.join(definition_path,
+ 'contributed_definitions',
+ f"{nexus_def}.nxdl.xml")
+ if os.path.exists(def_path):
+ return def_path
+
+ def_path = os.path.join(definition_path,
+ 'base_definitions',
+ f"{nexus_def}.nxdl.xml")
+
+ if os.path.exists(def_path):
+ return def_path
+
+ def_path = os.path.join(definition_path,
+ 'applications',
+ f"{nexus_def}.nxdl.xml")
+ if os.path.exists(def_path):
+ return def_path
+
+ raise ValueError("Incorrect definition is rendered, try with correct definition name.")
+
+
+def get_empty_template(nexus_def: str) -> Template:
+ """Generate eln in yaml file.
+
+ Parameters
+ ----------
+ nexus_def : str
+ Name of NeXus definition e.g. NXmpes
+
+ Return
+ ------
+ Template
+ """
+
+ nxdl_file = retrieve_nxdl_file(nexus_def)
+ nxdl_root = ET.parse(nxdl_file).getroot()
+ template = Template()
+ generate_template_from_nxdl(nxdl_root, template)
+
+ return template
+
+
+def take_care_of_special_concepts(key: str):
+ """For some special concepts such as @units."""
+ def unit_concept():
+ return {'value': None,
+ 'unit': None}
+
+ if key == '@units':
+ return unit_concept()
+
+
+def get_recursive_dict(concatenated_key: str,
+ recursive_dict: Dict[str, Any],
+ level_to_skip: int) -> None:
+ """Get recursive dict for concatenated string of keys.
+
+ Parameters
+ ----------
+ concatenated_key : str
+ String of keys separated by slash
+ recursive_dict : dict
+ Dict for recursively storing data.
+ level_to_skip : int
+ Number of hierarchy levels to skip.
+ """
+ # splitting keys like: '/entry[ENTRY]/position[POSITION]/xx'.
+ # skipping the first empty '' and top parts as directed by users.
+ key_li = concatenated_key.split('/')[level_to_skip + 1:]
+ # list of keys for special consideration
+ sp_key_li = ['@units']
+ last_key = ""
+ last_dict = {}
+ for key in key_li:
+ if '[' in key and '/' not in key:
+ key = re.findall(r'\[(.*?)\]', key,)[0].capitalize()
+ if not key:
+ continue
+ last_key = key
+ last_dict = recursive_dict
+ if key in recursive_dict:
+ if recursive_dict[key] is None:
+ recursive_dict[key] = {}
+ recursive_dict = recursive_dict[key]
+
+ else:
+ if key in sp_key_li:
+ recursive_dict.update(take_care_of_special_concepts(key))
+ else:
+ recursive_dict = recursive_dict[key]
+ else:
+ if key in sp_key_li:
+ recursive_dict.update(take_care_of_special_concepts(key))
+ else:
+ recursive_dict[key] = {}
+ recursive_dict = recursive_dict[key]
+ # For special key cleaning parts occurs inside take_care_of_special_concepts func.
+ if last_key not in sp_key_li:
+ last_dict[last_key] = None
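The key handling above (skip leading levels, capitalize the bracketed concept name, descend, set the leaf to `None`) can be condensed into a short sketch; this is a simplified variant without the `@units` special case:

```python
import re

def to_nested(concatenated_key, recursive_dict, level_to_skip=1):
    """Turn '/entry[ENTRY]/position[POSITION]/xx' into nested dict levels."""
    # split yields ['', 'entry[ENTRY]', 'position[POSITION]', 'xx'];
    # skip the leading '' plus level_to_skip levels
    keys = concatenated_key.split('/')[level_to_skip + 1:]
    for key in keys[:-1]:
        if '[' in key:
            # 'position[POSITION]' -> 'Position'
            key = re.findall(r'\[(.*?)\]', key)[0].capitalize()
        recursive_dict = recursive_dict.setdefault(key, {})
    recursive_dict[keys[-1]] = None  # leaf becomes an empty ELN slot

tree = {}
to_nested('/entry[ENTRY]/position[POSITION]/xx', tree)
print(tree)  # {'Position': {'xx': None}}
```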
+
+
+def generate_eln(nexus_def: str, eln_file: str = '', level_to_skip: int = 1) -> None:
+ """Genrate eln from application definition.
+
+ Parameters
+ ----------
+ nexus_def : str
+ Name of NeXus definition e.g. NXmpes
+ eln_file : str
+ Name of the output yaml file.
+
+ Returns:
+ None
+ """
+
+ template = get_empty_template(nexus_def)
+ recursive_dict: Dict[str, Any] = {}
+ for key, _ in template.items():
+ get_recursive_dict(key, recursive_dict, level_to_skip)
+
+ name_split = eln_file.rsplit('.')
+ if not eln_file:
+ if nexus_def[0:2] == 'NX':
+ raw_name = nexus_def[2:]
+ eln_file = raw_name + '.yaml'
+
+ elif len(name_split) == 1:
+ eln_file = eln_file + '.yaml'
+
+ elif len(name_split) == 2 and name_split[1] == 'yaml':
+ pass
+ else:
+ raise ValueError("Eln file should come with 'yaml' extension or without extension.")
+
+ with open(eln_file, encoding='utf-8', mode='w') as eln_f:
+ yaml.dump(recursive_dict, sort_keys=False, stream=eln_f)
diff --git a/pynxtools/eln_mapper/eln_mapper.py b/pynxtools/eln_mapper/eln_mapper.py
new file mode 100644
index 000000000..d23918f73
--- /dev/null
+++ b/pynxtools/eln_mapper/eln_mapper.py
@@ -0,0 +1,75 @@
+"""This module Generate ELN in a hierarchical format according to NEXUS definition."""
+# Copyright The NOMAD Authors.
+#
+# This file is part of NOMAD. See https://nomad-lab.eu for further info.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import click
+from pynxtools.eln_mapper.eln import generate_eln
+from pynxtools.eln_mapper.scheme_eln import generate_scheme_eln
+
+
+@click.command()
+@click.option(
+ '--nxdl',
+ required=True,
+ help="Name of NeXus definition without extension (.nxdl.xml)."
+)
+@click.option(
+ '--skip-top-levels',
+ default=1,
+ required=False,
+ type=int,
+ show_default=True,
+ help=("To skip the level of parent hierarchy level. E.g. for default 1 the part"
+ "Entry[ENTRY] from /Entry[ENTRY]/Instrument[INSTRUMENT]/... will be skiped.")
+)
+@click.option(
+ '--output-file',
+ required=False,
+ default='eln_data',
+ help=('Name of the output file.')
+)
+@click.option(
+ '--eln-type',
+ required=True,
+ type=click.Choice(['eln', 'scheme_eln'], case_sensitive=False),
+ default='eln'
+)
+def get_eln(nxdl: str,
+ skip_top_levels: int,
+ output_file: str,
+ eln_type: str):
+ """To generate ELN in yaml file format.
+
+ Parameters
+ ----------
+
+ nxdl : str
+ Name of NeXus definition e.g. NXmpes
+ skip_top_levels : int
+ To skip hierarchical levels
+ output_file : str
+ Name of the output file.
+ """
+ eln_type = eln_type.lower()
+ if eln_type == 'eln':
+ generate_eln(nxdl, output_file, skip_top_levels)
+ elif eln_type == 'scheme_eln':
+ generate_scheme_eln(nxdl, eln_file_name=output_file)
+
+
+if __name__ == "__main__":
+ get_eln()  # pylint: disable=no-value-for-parameter
diff --git a/pynxtools/eln_mapper/scheme_eln.py b/pynxtools/eln_mapper/scheme_eln.py
new file mode 100644
index 000000000..1152bbd08
--- /dev/null
+++ b/pynxtools/eln_mapper/scheme_eln.py
@@ -0,0 +1,281 @@
+"""This module intended to generate schema eln which usually randeredto NOMAD."""
+
+# Copyright The NOMAD Authors.
+#
+# This file is part of NOMAD. See https://nomad-lab.eu for further info.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+from typing import Dict, Any
+import xml.etree.ElementTree as ET
+import yaml
+from pynxtools.eln_mapper.eln import retrieve_nxdl_file
+from pynxtools.dataconverter.helpers import remove_namespace_from_tag
+
+
+NEXUS_TYPE_TO_NUMPY_TYPE = {'NX_CHAR': {'convert_typ': 'str',
+ 'component_nm': 'StringEditQuantity',
+ 'default_unit_display': ''},
+ 'NX_BOOLEAN': {'convert_typ': 'bool',
+ 'component_nm': 'BoolEditQuantity',
+ 'default_unit_display': ''},
+ 'NX_DATE_TIME': {'convert_typ': 'Datetime',
+ 'component_nm': 'DateTimeEditQuantity',
+ 'default_unit_display': ''},
+ 'NX_FLOAT': {'convert_typ': 'np.float64',
+ 'component_nm': 'NumberEditQuantity',
+ 'default_unit_display': ''},
+ 'NX_INT': {'convert_typ': 'int',
+ 'component_nm': 'NumberEditQuantity',
+ 'default_unit_display': ''},
+ 'NX_NUMBER': {'convert_typ': 'np.float64',
+ 'component_nm': 'NumberEditQuantity',
+ 'default_unit_display': ''},
+ '': {'convert_typ': '',
+ 'component_nm': '',
+ 'default_unit_display': ''},
+ }
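The mapping above drives how each NeXus field type becomes an ELN quantity. A sketch of the lookup, using an abbreviated copy of the table (the fallback to `NX_CHAR` mirrors the default applied later when a field has no `type` attribute):

```python
# Abbreviated variant of NEXUS_TYPE_TO_NUMPY_TYPE above
NEXUS_TYPE_TO_ELN = {
    'NX_CHAR': {'convert_typ': 'str', 'component_nm': 'StringEditQuantity'},
    'NX_FLOAT': {'convert_typ': 'np.float64', 'component_nm': 'NumberEditQuantity'},
}

def eln_entry_for(nx_type):
    """Build the ELN quantity skeleton for a NeXus field type."""
    info = NEXUS_TYPE_TO_ELN.get(nx_type, NEXUS_TYPE_TO_ELN['NX_CHAR'])
    return {'type': info['convert_typ'],
            'm_annotations': {'eln': {'component': info['component_nm']}}}

print(eln_entry_for('NX_FLOAT')['type'])  # np.float64
```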
+
+
+def construct_field_structure(fld_elem, quantities_dict):
+ """Construct the field structure, such as unit and value.
+
+ Parameters
+ ----------
+ fld_elem : ET.Element
+ Field element of the NeXus definition.
+ quantities_dict : Dict
+ Dict collecting the quantities of the section.
+ """
+ elm_attr = fld_elem.attrib
+ fld_nm = elm_attr['name'].lower()
+ quantities_dict[fld_nm] = {}
+ fld_dict = quantities_dict[fld_nm]
+
+ # handle type
+ if 'type' in elm_attr:
+ nx_fld_typ = elm_attr['type']
+ else:
+ nx_fld_typ = 'NX_CHAR'
+
+ if nx_fld_typ in NEXUS_TYPE_TO_NUMPY_TYPE:
+ cov_fld_typ = NEXUS_TYPE_TO_NUMPY_TYPE[nx_fld_typ]['convert_typ']
+
+ fld_dict['type'] = cov_fld_typ
+ if 'units' in elm_attr:
+ fld_dict['unit'] = ""
+ fld_dict['value'] = ""
+
+ # handle m_annotation
+ m_annotation = {'m_annotations': {'eln':
+ {'component':
+ NEXUS_TYPE_TO_NUMPY_TYPE[nx_fld_typ]['component_nm'],
+ 'defaultDisplayUnit':
+ (NEXUS_TYPE_TO_NUMPY_TYPE[nx_fld_typ]
+ ['default_unit_display'])}}}
+ fld_dict.update(m_annotation)
+
+ # handle description
+ construct_description(fld_elem, fld_dict)
+
+
+def construct_description(elm: ET.Element, concept_dict: Dict) -> None:
+ """Collect the doc text from the concept's doc element.
+ """
+ desc_text = ''
+ for child_elm in elm:
+ tag = remove_namespace_from_tag(child_elm.tag)
+ if tag == 'doc':
+ desc_text = child_elm.text
+ desc_text = ' '.join([x.strip() for x in desc_text.split('\n')])
+ break
+
+ concept_dict['description'] = desc_text
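The whitespace normalisation in the description handler collapses a multi-line `doc` block into one line. A minimal sketch (the sample doc text is invented; a final `strip()` is added here for tidiness, which the original leaves out):

```python
doc_text = """
    Incident wavelength of the
    X-ray source, in angstrom.
"""
# Strip each line, then rejoin with single spaces
normalized = ' '.join(x.strip() for x in doc_text.split('\n')).strip()
print(normalized)  # Incident wavelength of the X-ray source, in angstrom.
```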
+
+
+def construct_group_structure(grp_elm: ET.Element, subsections: Dict) -> None:
+ """To construct group structure as follows:
+ :
+ section:
+ m_annotations:
+ eln:
+ overview: true
+
+ Parameters
+ ----------
+ elm : ET.Element
+ Group element
+ subsections : Dict
+ Dict to include group recursively
+ """
+
+ default_m_annot = {'m_annotations': {'eln': {'overview': True}}}
+
+ elm_attrib = grp_elm.attrib
+ grp_desig = ""
+ if 'name' in elm_attrib:
+ grp_desig = elm_attrib['name'].capitalize()
+ elif 'type' in elm_attrib:
+ grp_desig = elm_attrib['type'][2:].capitalize()
+
+ subsections[grp_desig] = {}
+ grp_dict = subsections[grp_desig]
+
+ # add section in group
+ grp_dict['section'] = {}
+ section = grp_dict['section']
+ section.update(default_m_annot)
+
+ # pass the group element for recursive search
+ scan_xml_element_recursively(grp_elm, section)
+
+
+def _should_skip_iteration(elm: ET.Element) -> bool:
+ """Define some elements here that should be skipped.
+
+ Parameters
+ ----------
+ elm : ET.Element
+ The element to investigate to skip
+ """
+ attr = elm.attrib
+ elm_type = ''
+ if 'type' in attr:
+ elm_type = attr['type']
+ if elm_type in ['NXentry']:
+ return True
+ return False
+
+
+def scan_xml_element_recursively(nxdl_element: ET.Element,
+ recursive_dict: Dict,
+ root_name: str = "",
+ reader_name: str = '',
+ is_root: bool = False) -> None:
+ """Scan xml elements, and pass the element to the type of element handaler.
+
+ Parameters
+ ----------
+ nxdl_element : ET.Element
+ This xml element that will be scanned through the descendants.
+ recursive_dict : Dict
+ A dict that stores the hierarchical structure of the scheme eln.
+ root_name : str, optional
+ Name of the root that users want to name their application after,
+ e.g. MPES, by default "".
+ reader_name : str, optional
+ Preferred name of the reader.
+ is_root : bool, optional
+ Declare the element as root or not, by default False.
+ """
+
+ if is_root:
+ # Note for later: create a new function to handle the root part
+ nxdl = 'NX.nxdl'
+ recursive_dict[root_name] = {'base_sections':
+ ['nomad.datamodel.metainfo.eln.NexusDataConverter',
+ 'nomad.datamodel.data.EntryData']}
+
+ m_annotations: Dict = {'m_annotations': {'template': {'reader': reader_name,
+ 'nxdl': nxdl},
+ 'eln': {'hide': []}}}
+
+ recursive_dict[root_name].update(m_annotations)
+
+ recursive_dict = recursive_dict[root_name]
+
+ # Define quantities for taking care of field
+ quantities: Dict = None
+ subsections: Dict = None
+ for elm in nxdl_element:
+ tag = remove_namespace_from_tag(elm.tag)
+ # To skip the NXentry group but still consider its child elements
+ if _should_skip_iteration(elm):
+ scan_xml_element_recursively(elm, recursive_dict)
+ continue
+ if tag == 'field':
+ if quantities is None:
+ recursive_dict['quantities'] = {}
+ quantities = recursive_dict['quantities']
+ construct_field_structure(elm, quantities)
+ if tag == 'group':
+ if subsections is None:
+ recursive_dict['sub_sections'] = {}
+ subsections = recursive_dict['sub_sections']
+ construct_group_structure(elm, subsections)
+
+
+def get_eln_recursive_dict(recursive_dict: Dict, nexus_full_file: str) -> None:
+ """Develop a recursive dict that has hierarchical structure of scheme eln.
+
+ Parameters
+ ----------
+ recursive_dict : Dict
+ A dict that stores the hierarchical structure of the scheme eln.
+ nexus_full_file : str
+ Full path of NeXus file e.g. /paNXmpes.nxdl.xml
+ """
+
+ nxdl_root = ET.parse(nexus_full_file).getroot()
+ root_name = nxdl_root.attrib['name'][2:] if 'name' in nxdl_root.attrib else ""
+ recursive_dict['definitions'] = {'name': '',
+ 'sections': {}}
+ sections = recursive_dict['definitions']['sections']
+
+ scan_xml_element_recursively(nxdl_root, sections,
+ root_name=root_name, is_root=True)
+
+
+def generate_scheme_eln(nexus_def: str, eln_file_name: str = None) -> None:
+ """Generate schema eln that should go to Nomad while running the reader.
+ The output file will be .scheme.archive.yaml
+
+ Parameters
+ ----------
+ nexus_def : str
+ Name of nexus definition e.g. NXmpes
+ eln_file_name : str
+ Name of output file e.g. mpes
+
+    Returns
+    -------
+    None
+ """
+
+ file_parts: list = []
+ out_file_ext = 'scheme.archive.yaml'
+ raw_name = ""
+ out_file = ""
+
+ nxdl_file = retrieve_nxdl_file(nexus_def)
+
+ if eln_file_name is None:
+ # raw_name from e.g. //NXmpes.nxdl.xml
+ raw_name = nxdl_file.split('/')[-1].split('.')[0][2:]
+ out_file = '.'.join([raw_name, out_file_ext])
+ else:
+ file_parts = eln_file_name.split('.')
+ if len(file_parts) == 1:
+ raw_name = file_parts[0]
+ out_file = '.'.join([raw_name, out_file_ext])
+ elif len(file_parts) == 4 and '.'.join(file_parts[1:]) == out_file_ext:
+ out_file = eln_file_name
+ elif nexus_def[0:2] == 'NX':
+ raw_name = nexus_def[2:]
+ out_file = '.'.join([raw_name, out_file_ext])
+ else:
+ raise ValueError("Check for correct NeXus definition and output file name.")
+
+ recursive_dict: Dict[str, Any] = {}
+ get_eln_recursive_dict(recursive_dict, nxdl_file)
+
+ with open(out_file, mode='w', encoding='utf-8') as out_f:
+ yaml.dump(recursive_dict, sort_keys=False, stream=out_f)
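The NXDL-to-ELN traversal added in this file can be sketched in isolation. The following is a minimal, hypothetical reimplementation for illustration only — the names (`scan`, the demo NXDL string) and the simplified output shape are assumptions, not the pynxtools API:

```python
import xml.etree.ElementTree as ET

# A tiny, made-up NXDL fragment just to exercise the traversal.
NXDL = """<definition name="NXdemo" xmlns="http://definition.nexusformat.org/nxdl/3.1">
  <group type="NXentry">
    <field name="title" type="NX_CHAR"/>
    <group type="NXsample">
      <field name="temperature" type="NX_FLOAT"/>
    </group>
  </group>
</definition>"""


def strip_ns(tag):
    # Drop the '{namespace}' prefix ElementTree puts on every tag.
    return tag.split("}")[-1]


def scan(elem, out):
    # Collect fields under 'quantities' and groups under 'sub_sections',
    # mirroring the nesting of the scheme ELN YAML.
    for child in elem:
        tag = strip_ns(child.tag)
        if tag == "field":
            out.setdefault("quantities", {})[child.get("name")] = {
                "type": child.get("type", "NX_CHAR")}
        elif tag == "group":
            # Unnamed groups fall back to their NX class name minus 'NX'.
            name = child.get("name") or child.get("type", "NX")[2:]
            sub = out.setdefault("sub_sections", {}).setdefault(
                name, {"section": {}})
            scan(child, sub["section"])


root = ET.fromstring(NXDL)
eln = {}
scan(root, eln)
print(eln)
```

Running it produces a nested dict whose `sub_sections`/`quantities` keys echo the structure that `scan_xml_element_recursively` builds before it is dumped to YAML.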
diff --git a/pynxtools/nexus/nexus.py b/pynxtools/nexus/nexus.py
index 9afa711fb..ef5f64cd5 100644
--- a/pynxtools/nexus/nexus.py
+++ b/pynxtools/nexus/nexus.py
@@ -258,8 +258,9 @@ def get_hdf_path(hdf_info):
return hdf_info['hdf_node'].name.split('/')[1:]
+# pylint: disable=too-many-arguments,too-many-locals
@lru_cache(maxsize=None)
-def get_inherited_hdf_nodes(nx_name: str = None, elem: ET.Element = None, # pylint: disable=too-many-arguments,too-many-locals
+def get_inherited_hdf_nodes(nx_name: str = None, elem: ET.Element = None,
hdf_node=None, hdf_path=None, hdf_root=None, attr=False):
"""Returns a list of ET.Element for the given path."""
# let us start with the given definition file
@@ -563,8 +564,11 @@ def hdf_node_to_self_concept_path(hdf_info, logger):
class HandleNexus:
"""documentation"""
+
+ # pylint: disable=too-many-instance-attributes
def __init__(self, logger, nexus_file,
- d_inq_nd=None, c_inq_nd=None):
+ d_inq_nd=None, c_inq_nd=None,
+ is_in_memory_file=False):
self.logger = logger
local_dir = os.path.abspath(os.path.dirname(__file__))
@@ -572,6 +576,7 @@ def __init__(self, logger, nexus_file,
os.path.join(local_dir, '../../tests/data/nexus/201805_WSe2_arpes.nxs')
self.parser = None
self.in_file = None
+ self.is_hdf5_file_obj = is_in_memory_file
self.d_inq_nd = d_inq_nd
self.c_inq_nd = c_inq_nd
# Aggregating hdf path corresponds to concept query node
@@ -638,19 +643,28 @@ def full_visit(self, root, hdf_node, name, func):
def process_nexus_master_file(self, parser):
"""Process a nexus master file by processing all its nodes and their attributes"""
self.parser = parser
- self.in_file = h5py.File(
- self.input_file_name[0]
- if isinstance(self.input_file_name, list)
- else self.input_file_name, 'r'
- )
- self.full_visit(self.in_file, self.in_file, '', self.visit_node)
- if self.d_inq_nd is None and self.c_inq_nd is None:
- get_default_plotable(self.in_file, self.logger)
- # To log the provided concept and concepts founded
- if self.c_inq_nd is not None:
- for hdf_path in self.hdf_path_list_for_c_inq_nd:
- self.logger.info(hdf_path)
- self.in_file.close()
+ try:
+ if not self.is_hdf5_file_obj:
+ self.in_file = h5py.File(
+ self.input_file_name[0]
+ if isinstance(self.input_file_name, list)
+ else self.input_file_name, 'r'
+ )
+ else:
+ self.in_file = self.input_file_name
+
+ self.full_visit(self.in_file, self.in_file, '', self.visit_node)
+
+ if self.d_inq_nd is None and self.c_inq_nd is None:
+ get_default_plotable(self.in_file, self.logger)
+            # Log the provided concept and the concepts found
+ if self.c_inq_nd is not None:
+ for hdf_path in self.hdf_path_list_for_c_inq_nd:
+ self.logger.info(hdf_path)
+ finally:
+ # To test if hdf_file is open print(self.in_file.id.valid)
+ self.in_file.close()
@click.command()
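The path-or-open-object handling with a guaranteed close, as introduced in `process_nexus_master_file` above, follows a common pattern. A generic sketch with a toy stand-in class (all names here are illustrative, not the pynxtools or h5py API):

```python
class Resource:
    """Toy stand-in for an h5py.File: has a name and a close() method."""

    def __init__(self, name):
        self.name = name
        self.closed = False

    def close(self):
        self.closed = True


def process(source, is_open_obj=False):
    # Accept either a name to open or an already-open object, and
    # guarantee close() runs even if the processing body raises.
    handle = source if is_open_obj else Resource(source)
    try:
        return handle.name.upper()
    finally:
        handle.close()


print(process("scan.nxs"))             # opened by name internally, then closed
mem = Resource("in-memory")
print(process(mem, is_open_obj=True))  # caller-supplied object is closed too
assert mem.closed
```

The `try`/`finally` mirrors the diff above: whichever branch produced `self.in_file`, the file handle is released on every exit path.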
diff --git a/pynxtools/nexus/nxdl_utils.py b/pynxtools/nexus/nxdl_utils.py
index 706390a7c..aa64d5caa 100644
--- a/pynxtools/nexus/nxdl_utils.py
+++ b/pynxtools/nexus/nxdl_utils.py
@@ -701,6 +701,9 @@ def get_node_at_nxdl_path(nxdl_path: str = None,
we are looking for or the root elem from a previously loaded NXDL file
and finds the corresponding XML element with the needed attributes."""
try:
+ if nxdl_path.count("/") == 1 and nxdl_path not in ("/ENTRY", "/entry"):
+ elem = None
+ nx_name = "NXroot"
(class_path, nxdlpath, elist) = get_inherited_nodes(nxdl_path, nx_name, elem)
except ValueError as value_error:
if exc:
diff --git a/pyproject.toml b/pyproject.toml
index 93917652e..d2c7853f0 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,7 @@
[build-system]
requires = ["setuptools>=64.0.1", "setuptools-scm[toml]>=6.2"]
-build-backend = "setuptools.build_meta"
+backend-path = ["pynxtools"]
+build-backend = "_build_wrapper"
[project]
name = "pynxtools"
@@ -8,14 +9,15 @@ dynamic = ["version"]
authors = [
{ name = "The NOMAD Authors" },
]
-description = "Extend NeXus for materials science experiment and serve as a NOMAD parser implementation for NeXus."
+description = "Extend NeXus for experiments and characterization in Materials Science and Materials Engineering and serve as a NOMAD parser implementation for NeXus."
readme = "README.md"
-license = { file = "LICENSE.txt" }
-requires-python = ">=3.8,<3.11"
+license = { file = "LICENSE" }
+requires-python = ">=3.8,!=3.12"
classifiers = [
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
]
@@ -29,20 +31,23 @@ dependencies = [
"ase>=3.19.0",
"flatdict>=4.0.1",
"hyperspy>=1.7.5",
- "ifes_apt_tc_data_modeling>=0.0.9",
+ "ifes_apt_tc_data_modeling>=0.1",
"gitpython>=3.1.24",
"pytz>=2021.1",
"kikuchipy>=0.9.0",
"pyxem>=0.15.1",
"zipfile37==0.1.3",
- "nionswift==0.16.8",
+ "nionswift>=0.16.8",
"tzlocal<=4.3",
"scipy>=1.7.1",
"lark>=1.1.5",
"requests",
"requests_cache",
+ "mergedeep"
]
+[options]
+install_requires = "importlib-metadata ; python_version < '3.10'"
[project.urls]
"Homepage" = "https://github.com/FAIRmat-NFDI/pynxtools"
"Bug Tracker" = "https://github.com/FAIRmat-NFDI/pynxtools/issues"
@@ -66,6 +71,7 @@ dev = [
read_nexus = "pynxtools.nexus.nexus:main"
dataconverter = "pynxtools.dataconverter.convert:convert_cli"
nyaml2nxdl = "pynxtools.nyaml2nxdl.nyaml2nxdl:launch_tool"
+generate_eln = "pynxtools.eln_mapper.eln_mapper:get_eln"
[tool.setuptools.package-data]
pynxtools = ["definitions/**/*.xml", "definitions/**/*.xsd"]
@@ -77,5 +83,5 @@ pynxtools = ["definitions/**/*.xml", "definitions/**/*.xsd"]
exclude = ["pynxtools/definitions*"]
[tool.setuptools_scm]
-version_scheme = "guess-next-dev"
+version_scheme = "no-guess-dev"
local_scheme = "node-and-date"
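The tightened `requires-python = ">=3.8,!=3.12"` constraint can be checked programmatically. A small sketch using the third-party `packaging` library (assumed to be installed; pip itself vendors it):

```python
from packaging.specifiers import SpecifierSet

# Same constraint string as in pyproject.toml above.
spec = SpecifierSet(">=3.8,!=3.12")

# Membership testing accepts plain version strings.
for version in ["3.7", "3.8", "3.11", "3.12", "3.13"]:
    print(version, version in spec)
```

This confirms that 3.8 through 3.11 (and 3.13) satisfy the constraint while 3.7 and 3.12 are rejected.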
diff --git a/tests/data/dataconverter/NXtest.nxdl.xml b/tests/data/dataconverter/NXtest.nxdl.xml
index a2cc553fa..f4aa0aab4 100644
--- a/tests/data/dataconverter/NXtest.nxdl.xml
+++ b/tests/data/dataconverter/NXtest.nxdl.xml
@@ -60,6 +60,9 @@
A dummy entry to test optional parent check for required child.
+
+ This is a required group in an optional group.
+
diff --git a/tests/data/dataconverter/readers/apm/nomad_oasis_eln_schema_for_nx_apm/nxapm.schema.archive.yaml b/tests/data/dataconverter/readers/apm/nomad_oasis_eln_schema_for_nx_apm/nxapm.schema.archive.yaml
index a750d3a80..ba4a00b3b 100644
--- a/tests/data/dataconverter/readers/apm/nomad_oasis_eln_schema_for_nx_apm/nxapm.schema.archive.yaml
+++ b/tests/data/dataconverter/readers/apm/nomad_oasis_eln_schema_for_nx_apm/nxapm.schema.archive.yaml
@@ -18,8 +18,7 @@ definitions:
# This would be useful to make the default values set in `template` fixed.
# Leave the hide key even if you want to pass an empty list like in this example.
eln:
- # hide: ['nxdl', 'reader']
- hide: []
+ hide: ['nxdl', 'reader']
sub_sections:
entry:
section:
@@ -29,24 +28,6 @@ definitions:
eln:
overview: true
quantities:
- attr_version:
- type:
- type_kind: Enum
- type_data:
- - 'nexus-fairmat-proposal successor of 9636feecb79bb32b828b1a9804269573256d7696'
- description: Hashvalue of the NeXus application definition file
- m_annotations:
- eln:
- component: RadioEnumEditQuantity
- definition:
- type:
- type_kind: Enum
- type_data:
- - NXapm
- description: NeXus NXDL schema to which this file conforms
- m_annotations:
- eln:
- component: RadioEnumEditQuantity
experiment_identifier:
type: str
description: GUID of the experiment
@@ -58,40 +39,31 @@ definitions:
description: Free text details about the experiment
m_annotations:
eln:
- component: StringEditQuantity
+ component: RichTextEditQuantity
start_time:
type: Datetime
- description: ISO 8601 time code with local time zone offset to UTC when the experiment started.
+ description: |
+ ISO 8601 time code with local time zone offset
+ to UTC when the experiment started.
m_annotations:
eln:
component: DateTimeEditQuantity
end_time:
type: Datetime
- description: ISO 8601 time code with local time zone offset to UTC when the experiment ended.
+ description: |
+ ISO 8601 time code with local time zone offset
+ to UTC when the experiment ended.
m_annotations:
eln:
component: DateTimeEditQuantity
- program:
- type: str
- description: Name of the program used to create this file.
- m_annotations:
- eln:
- component: StringEditQuantity
- program__attr_version:
- type: str
- description: Version plus build number, commit hash, or description of the program to support reproducibility.
- m_annotations:
- eln:
- component: StringEditQuantity
run_number:
type: str
- description: Identifier in the instrument control software given for this experiment.
+ description: |
+ Identifier in the instrument control software
+ given for this experiment.
m_annotations:
eln:
component: StringEditQuantity
- # experiment_documentation(NXnote):
- # thumbnail(NXnote):
- # attr_type:
operation_mode:
type:
type_kind: Enum
@@ -124,6 +96,173 @@ definitions:
# m_annotations:
# eln:
# component: FileEditQuantity
+
+ sample:
+ section:
+ description: |
+ Description of the sample from which the specimen was prepared or
+ site-specifically cut out using e.g. a focused-ion beam instrument.
+ m_annotations:
+ eln:
+ quantities:
+ composition:
+ type: str
+ shape: ['*']
+ description: |
+ Chemical composition of the sample. The composition from e.g.
+ a composition table can be added as individual strings.
+ One string for each element with statements separated via a
+ single space. The string is expected to have the following format:
+ Symbol value unit +- stdev
+
+                An example: B 1. +- 0.2 means
+                a boron composition of 1. at.-% +- 0.2 at.-%.
+                If a string contains only a symbol, it is interpreted
+                as specifying the matrix or remainder element
+                of the composition table.
+
+                If the unit is omitted or given as %, it is interpreted as at.-%.
+                The unit can be at% or wt%, but all strings have to use either
+                atomic or weight percent consistently, with no mixtures.
+                No unit should be repeated for the stdev, as it has to be the
+                same unit as is used for the composition value.
+ m_annotations:
+ eln:
+ component: StringEditQuantity
+ grain_diameter:
+ type: np.float64
+ unit: micrometer
+ description: |
+ Qualitative information about the grain size, here specifically
+ described as the equivalent spherical diameter of an assumed
+ average grain size for the crystal ensemble.
+ m_annotations:
+ eln:
+ component: NumberEditQuantity
+ minValue: 0.0
+ defaultDisplayUnit: micrometer
+ grain_diameter_error:
+ type: np.float64
+ unit: micrometer
+ description: |
+                Magnitude of the standard deviation of the grain_diameter.
+ m_annotations:
+ eln:
+ component: NumberEditQuantity
+ minValue: 0.0
+ defaultDisplayUnit: micrometer
+ heat_treatment_temperature:
+ type: np.float64
+ unit: kelvin
+ description: |
+ The temperature of the last heat treatment step before quenching.
+ m_annotations:
+ eln:
+ component: NumberEditQuantity
+ minValue: 0.0
+ defaultDisplayUnit: kelvin
+ heat_treatment_temperature_error:
+ type: np.float64
+ unit: kelvin
+ description: |
+ Magnitude of the standard deviation of the heat_treatment_temperature.
+ m_annotations:
+ eln:
+ component: NumberEditQuantity
+ minValue: 0.0
+ defaultDisplayUnit: kelvin
+ heat_treatment_quenching_rate:
+ type: np.float64
+ unit: kelvin/second
+ description: |
+ Rate of the last quenching step.
+ m_annotations:
+ eln:
+ component: NumberEditQuantity
+ minValue: 0.0
+ defaultDisplayUnit: kelvin/second
+ heat_treatment_quenching_rate_error:
+ type: np.float64
+ unit: K/s
+ description: |
+ Magnitude of the standard deviation of the heat_treatment_quenching_rate.
+ m_annotations:
+ eln:
+ component: NumberEditQuantity
+ minValue: 0.0
+ defaultDisplayUnit: K/s
+ specimen:
+ section:
+ description: |
+ Details about the specimen and its immediate environment.
+ m_annotations:
+ eln:
+ quantities:
+ name:
+ type: str
+ description: |
+ GUID which distinguishes the specimen from all others and especially
+ the predecessor/origin from where the specimen was cut.
+ In cases where the specimen was e.g. site-specifically cut from
+ samples or in cases of an instrument session during which multiple
+ specimens are loaded, the name has to be descriptive enough to
+ resolve which specimen on e.g. the microtip array was taken.
+ This field must not be used for an alias of the specimen.
+ Instead, use short_title.
+ m_annotations:
+ eln:
+ component: StringEditQuantity
+ # sample_history:
+ # type: str
+ # description: |
+ # Reference to the location of or a GUID providing as many details
+ # as possible of the material, its microstructure, and its
+ # thermo-chemo-mechanical processing/preparation history.
+ # m_annotations:
+ # eln:
+ # component: StringEditQuantity
+ preparation_date:
+ type: Datetime
+ description: |
+ ISO 8601 time code with local time zone offset to UTC
+ when the measured specimen surface was prepared last time.
+ m_annotations:
+ eln:
+ component: DateTimeEditQuantity
+ is_polycrystalline:
+ type: bool
+ description: |
+ Is the specimen, i.e. the tip, polycrystalline, i.e. does
+                it include a grain or phase boundary?
+ m_annotations:
+ eln:
+ component: BoolEditQuantity
+ alias:
+ type: str
+ description: |
+ Possibility to give an abbreviation of the specimen name field.
+ m_annotations:
+ eln:
+ component: StringEditQuantity
+ # atom_types should be a list of strings
+ # atom_types:
+ # type: str
+ # shape: ['*']
+ # description: |
+ # Use Hill's system for listing elements of the periodic table which
+ # are inside or attached to the surface of the specimen and thus
+ # relevant from a scientific point of view.
+ # m_annotations:
+ # eln:
+ # component: StringEditQuantity
+ description:
+ type: str
+ description: |
+                Discouraged free-text field, to be used only when properly
+                designed records for the sample_history are not available.
+ m_annotations:
+ eln:
+ component: RichTextEditQuantity
user:
repeats: true
section:
@@ -193,102 +332,6 @@ definitions:
m_annotations:
eln:
component: StringEditQuantity
- specimen:
- section:
- description: |
- Details about the specimen and its immediate environment.
- m_annotations:
- eln:
- quantities:
- name:
- type: str
- description: |
- GUID which distinguishes the specimen from all others and especially
- the predecessor/origin from where the specimen was cut.
- In cases where the specimen was e.g. site-specifically cut from
- samples or in cases of an instrument session during which multiple
- specimens are loaded, the name has to be descriptive enough to
- resolve which specimen on e.g. the microtip array was taken.
- This field must not be used for an alias of the specimen.
- Instead, use short_title.
- m_annotations:
- eln:
- component: StringEditQuantity
- sample_history:
- type: str
- description: |
- Reference to the location of or a GUID providing as many details
- as possible of the material, its microstructure, and its
- thermo-chemo-mechanical processing/preparation history.
- m_annotations:
- eln:
- component: StringEditQuantity
- preparation_date:
- type: Datetime
- description: |
- ISO 8601 time code with local time zone offset to UTC information when
- the measured specimen surface was actively prepared.
- m_annotations:
- eln:
- component: DateTimeEditQuantity
- short_title:
- type: str
- description: Possibility to give an abbreviation of the specimen name field.
- m_annotations:
- eln:
- component: StringEditQuantity
- # atom_types should be a list of strings
- atom_types:
- type: str
- shape: ['*']
- description: |
- Use Hill's system for listing elements of the periodic table which
- are inside or attached to the surface of the specimen and thus
- relevant from a scientific point of view.
- m_annotations:
- eln:
- component: StringEditQuantity
- description:
- type: str
- description: |
- Discouraged free text field to be used in the case when properly
- designed records for the sample_history are not available.
- m_annotations:
- eln:
- component: StringEditQuantity
- # composition_element_symbol:
- # type: str
- # shape: ['*']
- # description: |
- # Chemical symbol.
- # m_annotations:
- # eln:
- # component: StringEditQuantity
- # composition_mass_fraction:
- # type: np.float64
- # shape: ['*']
- # description: |
- # Composition but this can be atomic or mass fraction.
- # Best is you specify which you want. Under the hood oasis uses pint
- # /nomad/nomad/units is the place where you can predefine exotic
- # constants and units for a local oasis instance
- # m_annotations:
- # eln:
- # component: NumberEditQuantity
- # minValue: 0.
- # maxValue: 1.
- # composition_mass_fraction_error:
- # type: np.float64
- # shape: ['*']
- # description: |
- # Composition but this can be atomic or mass fraction.
- # Also here best to be specific. If people write at.-% but mean wt.-% you
- # cannot guard yourself against this
- # m_annotations:
- # eln:
- # component: NumberEditQuantity
- # minValue: 0.
- # maxValue: 1.
atom_probe:
section:
description: |
@@ -302,6 +345,7 @@ definitions:
type_data:
- success
- failure
+ - unknown
description: |
A statement whether the measurement was
successful or failed prematurely.
@@ -314,6 +358,14 @@ definitions:
m_annotations:
eln:
component: StringEditQuantity
+ location:
+ type: str
+ description: |
+ Location of the lab or place where the instrument is installed.
+ Using GEOREF is preferred.
+ m_annotations:
+ eln:
+ component: StringEditQuantity
# (NXfabrication):
flight_path_length:
type: np.float64
@@ -327,6 +379,18 @@ definitions:
defaultDisplayUnit: meter
minValue: 0.0
maxValue: 10.0
+ field_of_view:
+ type: np.float64
+ unit: nanometer
+ description: |
+ The nominal diameter of the specimen ROI which is measured in the
+ experiment. Physically, the specimen cannot be measured completely
+                because ions may launch but never be detected or may hit elsewhere.
+ m_annotations:
+ eln:
+ component: NumberEditQuantity
+ defaultDisplayUnit: nanometer
+ minValue: 0.0
fabrication_vendor:
type: str
description: Name of the manufacturer/company, i.e. AMETEK/Cameca.
@@ -415,7 +479,7 @@ definitions:
component: NumberEditQuantity
defaultDisplayUnit: kelvin
minValue: 0.0
- maxValue: 273.15
+ maxValue: 300.0
analysis_chamber_pressure:
type: np.float64
unit: torr
@@ -485,8 +549,8 @@ definitions:
type_kind: Enum
type_data:
- laser
- - high_voltage
- - laser_and_high_voltage
+ - voltage
+ - laser_and_voltage
description: |
Which pulsing mode was used?
m_annotations:
@@ -510,41 +574,53 @@ definitions:
component: NumberEditQuantity
minValue: 0.0
maxValue: 1.0
- laser_source_name:
- type: str
- description: Given name/alias.
- m_annotations:
- eln:
- component: StringEditQuantity
- laser_source_wavelength:
- type: np.float64
- unit: meter
- description: Nominal wavelength of the laser radiation.
- m_annotations:
- eln:
- component: NumberEditQuantity
- defaultDisplayUnit: nanometer
- minValue: 0.0
- laser_source_power:
- type: np.float64
- unit: watt
- description: |
- Nominal power of the laser source while
- illuminating the specimen.
- m_annotations:
- eln:
- component: NumberEditQuantity
- defaultDisplayUnit: nanowatt
- minValue: 0.0
- laser_source_pulse_energy:
- type: np.float64
- unit: joule
- description: Average energy of the laser at peak of each pulse.
- m_annotations:
- eln:
- component: NumberEditQuantity
- defaultDisplayUnit: picojoule
- minValue: 0.0
+ # LEAP 6000 instrument has up to two lasers
+ sub_sections:
+ laser_source:
+ repeats: True
+ section:
+ description: |
+ Details about each laser pulsing unit.
+ LEAP6000 instruments can use up to two lasers.
+ m_annotations:
+ eln:
+ quantities:
+ name:
+ type: str
+ description: Given name/alias.
+ m_annotations:
+ eln:
+ component: StringEditQuantity
+ wavelength:
+ type: np.float64
+ unit: nanometer
+ description: Nominal wavelength of the laser radiation.
+ m_annotations:
+ eln:
+ component: NumberEditQuantity
+ defaultDisplayUnit: nanometer
+ minValue: 0.0
+ power:
+ type: np.float64
+ unit: nanowatt
+ description: |
+ Nominal power of the laser source while
+ illuminating the specimen.
+ m_annotations:
+ eln:
+ component: NumberEditQuantity
+ defaultDisplayUnit: nanowatt
+ minValue: 0.0
+ pulse_energy:
+ type: np.float64
+ unit: picojoule
+ description: |
+ Average energy of the laser at peak of each pulse.
+ m_annotations:
+ eln:
+ component: NumberEditQuantity
+ defaultDisplayUnit: picojoule
+ minValue: 0.0
# control_software:
# section:
# description: Which control software was used e.g. IVAS/APSuite
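The repeated `laser_source` sub_section added above follows NOMAD's scheme-ELN YAML layout. A minimal snippet of the same shape can be loaded and inspected with PyYAML (which pynxtools already depends on); the snippet itself is a reduced, illustrative excerpt, not the full schema:

```python
import yaml

SNIPPET = """
sub_sections:
  laser_source:
    repeats: true
    section:
      quantities:
        wavelength:
          type: np.float64
          unit: nanometer
          m_annotations:
            eln:
              component: NumberEditQuantity
"""

doc = yaml.safe_load(SNIPPET)
laser = doc["sub_sections"]["laser_source"]
# 'repeats: true' parses to a Python bool, letting NOMAD render
# multiple laser_source instances (e.g. a two-laser LEAP 6000).
print(laser["repeats"])
print(laser["section"]["quantities"]["wavelength"]["unit"])
```

Keeping per-laser quantities (`name`, `wavelength`, `power`, `pulse_energy`) inside a repeating sub_section, rather than as flat `laser_source_*` fields, is what allows the schema to describe instruments with more than one laser.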
diff --git a/tests/data/dataconverter/readers/ellips/eln_data.yaml b/tests/data/dataconverter/readers/ellips/eln_data.yaml
index 70b708ef3..785e8e1e6 100644
--- a/tests/data/dataconverter/readers/ellips/eln_data.yaml
+++ b/tests/data/dataconverter/readers/ellips/eln_data.yaml
@@ -58,9 +58,6 @@ colnames:
- Delta
- err.Psi
- err.Delta
-definition: NXellipsometry
-definition/@url: https://github.com/FAIRmat-NFDI/nexus_definitions/blob/fairmat/contributed_definitions/NXellipsometry.nxdl.xml
-definition/@version: 0.0.2
derived_parameter_type: depolarization
experiment_description: RC2 scan on 2nm SiO2 on Si in air
experiment_identifier: exp-ID
diff --git a/tests/data/dataconverter/readers/json_map/data.json b/tests/data/dataconverter/readers/json_map/data.json
index 28fb71b48..ae0cf6c88 100644
--- a/tests/data/dataconverter/readers/json_map/data.json
+++ b/tests/data/dataconverter/readers/json_map/data.json
@@ -17,5 +17,6 @@
"type": "2nd type",
"date_value": "2022-01-22T12:14:12.05018+00:00",
"required_child": 1,
- "optional_child": 1
+ "optional_child": 1,
+ "random_data": [0, 1]
}
\ No newline at end of file
diff --git a/tests/data/dataconverter/readers/json_map/data.mapping.json b/tests/data/dataconverter/readers/json_map/data.mapping.json
index 5fc7b95c5..055b0977e 100644
--- a/tests/data/dataconverter/readers/json_map/data.mapping.json
+++ b/tests/data/dataconverter/readers/json_map/data.mapping.json
@@ -18,5 +18,6 @@
"/ENTRY[entry]/optional_parent/required_child": "/required_child",
"/ENTRY[entry]/program_name": "Example for listing exact data in the map file: Nexus Parser",
"/ENTRY[entry]/required_group/description": "An example description",
- "/ENTRY[entry]/required_group2/description": "An example description"
+ "/ENTRY[entry]/required_group2/description": "An example description",
+ "/ENTRY[entry]/optional_parent/req_group_in_opt_group/DATA[data]": "/random_data"
}
\ No newline at end of file
diff --git a/tests/data/dataconverter/readers/mpes/Ref_nexus_mpes.log b/tests/data/dataconverter/readers/mpes/Ref_nexus_mpes.log
index 35c7fb42f..d4a58e2ee 100644
--- a/tests/data/dataconverter/readers/mpes/Ref_nexus_mpes.log
+++ b/tests/data/dataconverter/readers/mpes/Ref_nexus_mpes.log
@@ -8,12 +8,13 @@ DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY):
DEBUG -
DEBUG - documentation (NXentry.nxdl.xml:):
DEBUG -
- (**required**) :ref:`NXentry` describes the measurement.
-
- The top-level NeXus group which contains all the data and associated
- information that comprise a single measurement.
- It is mandatory that there is at least one
- group of this type in the NeXus file.
+ (**required**) :ref:`NXentry` describes the measurement.
+
+ The top-level NeXus group which contains all the data and associated
+ information that comprise a single measurement.
+ It is mandatory that there is at least one
+ group of this type in the NeXus file.
+
DEBUG - ===== ATTRS (//entry@NX_class)
DEBUG - value: NXentry
DEBUG - classpath: ['NXentry']
@@ -32,23 +33,23 @@ DEBUG - NXmpes.nxdl.xml:/ENTRY@default - [NX_CHAR]
DEBUG - NXentry.nxdl.xml:@default - [NX_CHAR]
DEBUG - documentation (NXentry.nxdl.xml:/default):
DEBUG -
- .. index:: find the default plottable data
- .. index:: plotting
- .. index:: default attribute value
-
- Declares which :ref:`NXdata` group contains the data
- to be shown by default.
- It is used to resolve ambiguity when
- one :ref:`NXdata` group exists.
- The value :ref:`names ` a child group. If that group
- itself has a ``default`` attribute, continue this chain until an
- :ref:`NXdata` group is reached.
-
- For more information about how NeXus identifies the default
- plottable data, see the
- :ref:`Find Plottable Data, v3 `
- section.
-
+ .. index:: find the default plottable data
+ .. index:: plotting
+ .. index:: default attribute value
+
+ Declares which :ref:`NXdata` group contains the data
+ to be shown by default.
+ It is used to resolve ambiguity when
+ one :ref:`NXdata` group exists.
+ The value :ref:`names ` a child group. If that group
+ itself has a ``default`` attribute, continue this chain until an
+ :ref:`NXdata` group is reached.
+
+ For more information about how NeXus identifies the default
+ plottable data, see the
+ :ref:`Find Plottable Data, v3 `
+ section.
+
DEBUG - ===== FIELD (//entry/collection_time):
DEBUG - value: 2317.343
DEBUG - classpath: ['NXentry', 'NX_FLOAT']
@@ -57,9 +58,9 @@ NXentry.nxdl.xml:/collection_time
DEBUG - <>
DEBUG - documentation (NXentry.nxdl.xml:/collection_time):
DEBUG -
- Time transpired actually collecting data i.e. taking out time when collection was
- suspended due to e.g. temperature out of range
-
+ Time transpired actually collecting data i.e. taking out time when collection was
+ suspended due to e.g. temperature out of range
+
DEBUG - ===== ATTRS (//entry/collection_time@units)
DEBUG - value: s
DEBUG - classpath: ['NXentry', 'NX_FLOAT']
@@ -77,34 +78,33 @@ DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY/DATA):
DEBUG -
DEBUG - documentation (NXentry.nxdl.xml:/DATA):
DEBUG -
- The data group
-
- .. note:: Before the NIAC2016 meeting [#]_, at least one
- :ref:`NXdata` group was required in each :ref:`NXentry` group.
- At the NIAC2016 meeting, it was decided to make :ref:`NXdata`
- an optional group in :ref:`NXentry` groups for data files that
- do not use an application definition.
- It is recommended strongly that all NeXus data files provide
- a NXdata group.
- It is permissable to omit the NXdata group only when
- defining the default plot is not practical or possible
- from the available data.
-
- For example, neutron event data may not have anything that
- makes a useful plot without extensive processing.
-
- Certain application definitions override this decision and
- require an :ref:`NXdata` group
- in the :ref:`NXentry` group. The ``minOccurs=0`` attribute
- in the application definition will indicate the
- :ref:`NXdata` group
- is optional, otherwise, it is required.
-
- .. [#] NIAC2016:
- https://www.nexusformat.org/NIAC2016.html,
- https://github.com/nexusformat/NIAC/issues/16
-
-
+ The data group
+
+ .. note:: Before the NIAC2016 meeting [#]_, at least one
+ :ref:`NXdata` group was required in each :ref:`NXentry` group.
+ At the NIAC2016 meeting, it was decided to make :ref:`NXdata`
+ an optional group in :ref:`NXentry` groups for data files that
+ do not use an application definition.
+ It is recommended strongly that all NeXus data files provide
+ a NXdata group.
+ It is permissable to omit the NXdata group only when
+ defining the default plot is not practical or possible
+ from the available data.
+
+ For example, neutron event data may not have anything that
+ makes a useful plot without extensive processing.
+
+ Certain application definitions override this decision and
+ require an :ref:`NXdata` group
+ in the :ref:`NXentry` group. The ``minOccurs=0`` attribute
+ in the application definition will indicate the
+ :ref:`NXdata` group
+ is optional, otherwise, it is required.
+
+ .. [#] NIAC2016:
+ https://www.nexusformat.org/NIAC2016.html,
+ https://github.com/nexusformat/NIAC/issues/16
+
DEBUG - documentation (NXdata.nxdl.xml:):
DEBUG -
:ref:`NXdata` describes the plottable data and related dimension scales.
@@ -466,21 +466,21 @@ DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY/definition):
DEBUG -
DEBUG - documentation (NXentry.nxdl.xml:/definition):
DEBUG -
- (alternate use: see same field in :ref:`NXsubentry` for preferred)
-
- Official NeXus NXDL schema to which this entry conforms which must be
- the name of the NXDL file (case sensitive without the file extension)
- that the NXDL schema is defined in.
-
- For example the ``definition`` field for a file that conformed to the
- *NXarpes.nxdl.xml* definition must contain the string **NXarpes**.
-
- This field is provided so that :ref:`NXentry` can be the overlay position
- in a NeXus data file for an application definition and its
- set of groups, fields, and attributes.
-
- *It is advised* to use :ref:`NXsubentry`, instead, as the overlay position.
-
+ (alternate use: see same field in :ref:`NXsubentry` for preferred)
+
+ Official NeXus NXDL schema to which this entry conforms which must be
+ the name of the NXDL file (case sensitive without the file extension)
+ that the NXDL schema is defined in.
+
+ For example the ``definition`` field for a file that conformed to the
+ *NXarpes.nxdl.xml* definition must contain the string **NXarpes**.
+
+ This field is provided so that :ref:`NXentry` can be the overlay position
+ in a NeXus data file for an application definition and its
+ set of groups, fields, and attributes.
+
+ *It is advised* to use :ref:`NXsubentry`, instead, as the overlay position.
+
DEBUG - ===== ATTRS (//entry/definition@version)
DEBUG - value: None
DEBUG - classpath: ['NXentry', 'NX_CHAR']
@@ -493,7 +493,9 @@ DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY/definition/version):
DEBUG -
DEBUG - NXentry.nxdl.xml:/definition@version - [NX_CHAR]
DEBUG - documentation (NXentry.nxdl.xml:/definition/version):
-DEBUG - NXDL version number
+DEBUG -
+ NXDL version number
+
DEBUG - ===== FIELD (//entry/duration):
DEBUG - value: 2317
DEBUG - classpath: ['NXentry', 'NX_INT']
@@ -501,7 +503,9 @@ DEBUG - classes:
NXentry.nxdl.xml:/duration
DEBUG - <>
DEBUG - documentation (NXentry.nxdl.xml:/duration):
-DEBUG - Duration of measurement
+DEBUG -
+ Duration of measurement
+
DEBUG - ===== ATTRS (//entry/duration@units)
DEBUG - value: s
DEBUG - classpath: ['NXentry', 'NX_INT']
@@ -515,7 +519,9 @@ DEBUG - classes:
NXentry.nxdl.xml:/end_time
DEBUG - <>
DEBUG - documentation (NXentry.nxdl.xml:/end_time):
-DEBUG - Ending time of measurement
+DEBUG -
+ Ending time of measurement
+
DEBUG - ===== FIELD (//entry/entry_identifier):
DEBUG - value: b'2019/2019_05/2019_05_23/Scan005'
DEBUG - classpath: ['NXentry', 'NX_CHAR']
@@ -523,22 +529,39 @@ DEBUG - classes:
NXentry.nxdl.xml:/entry_identifier
DEBUG - <>
DEBUG - documentation (NXentry.nxdl.xml:/entry_identifier):
-DEBUG - unique identifier for the measurement, defined by the facility.
+DEBUG -
+ unique identifier for the measurement, defined by the facility.
+
DEBUG - ===== FIELD (//entry/experiment_facility):
DEBUG - value: b'Time Resolved ARPES'
-DEBUG - classpath: ['NXentry']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NX_CHAR']
+DEBUG - classes:
+NXentry.nxdl.xml:/experiment_facility
+DEBUG - <>
+DEBUG - documentation (NXentry.nxdl.xml:/experiment_facility):
DEBUG -
+ Name of the experimental facility
+
DEBUG - ===== FIELD (//entry/experiment_institution):
DEBUG - value: b'Fritz Haber Institute - Max Planck Society'
-DEBUG - classpath: ['NXentry']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NX_CHAR']
+DEBUG - classes:
+NXentry.nxdl.xml:/experiment_institution
+DEBUG - <>
+DEBUG - documentation (NXentry.nxdl.xml:/experiment_institution):
DEBUG -
+ Name of the institution hosting the facility
+
DEBUG - ===== FIELD (//entry/experiment_laboratory):
DEBUG - value: b'Clean Room 4'
-DEBUG - classpath: ['NXentry']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NX_CHAR']
+DEBUG - classes:
+NXentry.nxdl.xml:/experiment_laboratory
+DEBUG - <>
+DEBUG - documentation (NXentry.nxdl.xml:/experiment_laboratory):
DEBUG -
+ Name of the laboratory or beamline
+
DEBUG - ===== GROUP (//entry/instrument [NXmpes::/NXentry/NXinstrument]):
DEBUG - classpath: ['NXentry', 'NXinstrument']
DEBUG - classes:
@@ -552,15 +575,15 @@ DEBUG - documentation (NXentry.nxdl.xml:/INSTRUMENT):
DEBUG -
DEBUG - documentation (NXinstrument.nxdl.xml:):
DEBUG -
- Collection of the components of the instrument or beamline.
-
- Template of instrument descriptions comprising various beamline components.
- Each component will also be a NeXus group defined by its distance from the
- sample. Negative distances represent beamline components that are before the
- sample while positive distances represent components that are after the sample.
- This device allows the unique identification of beamline components in a way
- that is valid for both reactor and pulsed instrumentation.
-
+ Collection of the components of the instrument or beamline.
+
+ Template of instrument descriptions comprising various beamline components.
+ Each component will also be a NeXus group defined by its distance from the
+ sample. Negative distances represent beamline components that are before the
+ sample while positive distances represent components that are after the sample.
+ This device allows the unique identification of beamline components in a way
+ that is valid for both reactor and pulsed instrumentation.
+
DEBUG - ===== ATTRS (//entry/instrument@NX_class)
DEBUG - value: NXinstrument
DEBUG - classpath: ['NXentry', 'NXinstrument']
@@ -583,22 +606,22 @@ DEBUG - documentation (NXinstrument.nxdl.xml:/BEAM):
DEBUG -
DEBUG - documentation (NXbeam.nxdl.xml:):
DEBUG -
- Properties of the neutron or X-ray beam at a given location.
-
- This group is intended to be referenced
- by beamline component groups within the :ref:`NXinstrument` group or by the :ref:`NXsample` group. This group is
- especially valuable in storing the results of instrument simulations in which it is useful
- to specify the beam profile, time distribution etc. at each beamline component. Otherwise,
- its most likely use is in the :ref:`NXsample` group in which it defines the results of the neutron
- scattering by the sample, e.g., energy transfer, polarizations. Finally, There are cases where the beam is
- considered as a beamline component and this group may be defined as a subgroup directly inside
- :ref:`NXinstrument`, in which case it is recommended that the position of the beam is specified by an
- :ref:`NXtransformations` group, unless the beam is at the origin (which is the sample).
-
- Note that incident_wavelength and related fields can be a scalar values or arrays, depending on the use case.
- To support these use cases, the explicit dimensionality of these fields is not specified, but it can be inferred
- by the presense of and shape of accompanying fields, such as incident_wavelength_weights for a polychromatic beam.
-
+ Properties of the neutron or X-ray beam at a given location.
+
+ This group is intended to be referenced
+ by beamline component groups within the :ref:`NXinstrument` group or by the :ref:`NXsample` group. This group is
+ especially valuable in storing the results of instrument simulations in which it is useful
+ to specify the beam profile, time distribution etc. at each beamline component. Otherwise,
+ its most likely use is in the :ref:`NXsample` group in which it defines the results of the neutron
+ scattering by the sample, e.g., energy transfer, polarizations. Finally, there are cases where the beam is
+ considered as a beamline component and this group may be defined as a subgroup directly inside
+ :ref:`NXinstrument`, in which case it is recommended that the position of the beam is specified by an
+ :ref:`NXtransformations` group, unless the beam is at the origin (which is the sample).
+
+ Note that incident_wavelength and related fields can be scalar values or arrays, depending on the use case.
+ To support these use cases, the explicit dimensionality of these fields is not specified, but it can be inferred
+ by the presence and shape of accompanying fields, such as incident_wavelength_weights for a polychromatic beam.
+
DEBUG - ===== ATTRS (//entry/instrument/beam@NX_class)
DEBUG - value: NXbeam
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam']
@@ -632,8 +655,8 @@ NXbeam.nxdl.xml:/extent
DEBUG - <>
DEBUG - documentation (NXbeam.nxdl.xml:/extent):
DEBUG -
- Size of the beam entering this component. Note this represents
- a rectangular beam aperture, and values represent FWHM
+ Size of the beam entering this component. Note this represents
+ a rectangular beam aperture, and values represent FWHM
DEBUG - ===== ATTRS (//entry/instrument/beam/extent@units)
DEBUG - value: µm
@@ -651,7 +674,24 @@ DEBUG - <>
DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/BEAM/incident_energy):
DEBUG -
DEBUG - documentation (NXbeam.nxdl.xml:/incident_energy):
-DEBUG - Energy carried by each particle of the beam on entering the beamline component
+DEBUG -
+ Energy carried by each particle of the beam on entering the beamline component.
+
+ In the case of a monochromatic beam this is the scalar energy.
+ Several other use cases are permitted, depending on the
+ presence of other incident_energy_X fields.
+
+ * In the case of a polychromatic beam this is an array of length m of energies, with the relative weights in incident_energy_weights.
+ * In the case of a monochromatic beam that varies shot-to-shot, this is an array of energies, one for each recorded shot.
+ Here, incident_energy_weights and incident_energy_spread are not set.
+ * In the case of a polychromatic beam that varies shot-to-shot,
+ this is an array of length m with the relative weights in incident_energy_weights as a 2D array.
+ * In the case of a polychromatic beam that varies shot-to-shot and where the channels also vary,
+ this is a 2D array of dimensions nP by m (slow to fast) with the relative weights in incident_energy_weights as a 2D array.
+
+ Note, variants are a good way to represent several of these use cases in a single dataset,
+ e.g. if a calibrated, single-value energy value is available along with the original spectrum from which it was calibrated.
+
DEBUG - ===== ATTRS (//entry/instrument/beam/incident_energy@units)
DEBUG - value: eV
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_FLOAT']
@@ -665,15 +705,25 @@ DEBUG - value: 0.11
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_NUMBER']
DEBUG - classes:
NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/BEAM/incident_energy_spread
+NXbeam.nxdl.xml:/incident_energy_spread
DEBUG - <>
DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/BEAM/incident_energy_spread):
DEBUG -
+DEBUG - documentation (NXbeam.nxdl.xml:/incident_energy_spread):
+DEBUG -
+ The energy spread FWHM for the corresponding energy(ies) in incident_energy. In the case of shot-to-shot variation in
+ the energy spread, this is a 2D array of dimension nP by m
+ (slow to fast) of the spreads of the corresponding
+ energy in incident_energy.
+
DEBUG - ===== ATTRS (//entry/instrument/beam/incident_energy_spread@units)
DEBUG - value: eV
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_NUMBER']
DEBUG - classes:
NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/BEAM/incident_energy_spread
+NXbeam.nxdl.xml:/incident_energy_spread
DEBUG - NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/BEAM/incident_energy_spread@units [NX_ENERGY]
+DEBUG - NXbeam.nxdl.xml:/incident_energy_spread@units [NX_ENERGY]
DEBUG - ===== FIELD (//entry/instrument/beam/incident_polarization):
DEBUG - value: [1. 1. 0. 0.]
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_NUMBER']
@@ -684,7 +734,10 @@ DEBUG - <>
DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/BEAM/incident_polarization):
DEBUG -
DEBUG - documentation (NXbeam.nxdl.xml:/incident_polarization):
-DEBUG - Polarization vector on entering beamline component
+DEBUG -
+ Incident polarization as a Stokes vector
+ on entering beamline component
+
DEBUG - ===== ATTRS (//entry/instrument/beam/incident_polarization@units)
DEBUG - value: V^2/mm^2
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_NUMBER']
@@ -695,14 +748,20 @@ DEBUG - NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/BEAM/incident_polarization@units [NX_A
DEBUG - NXbeam.nxdl.xml:/incident_polarization@units [NX_ANY]
DEBUG - ===== FIELD (//entry/instrument/beam/pulse_duration):
DEBUG - value: 20.0
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_FLOAT']
+DEBUG - classes:
+NXbeam.nxdl.xml:/pulse_duration
+DEBUG - <>
+DEBUG - documentation (NXbeam.nxdl.xml:/pulse_duration):
DEBUG -
+ FWHM duration of the pulses at the diagnostic point
+
DEBUG - ===== ATTRS (//entry/instrument/beam/pulse_duration@units)
DEBUG - value: fs
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_FLOAT']
+DEBUG - classes:
+NXbeam.nxdl.xml:/pulse_duration
+DEBUG - NXbeam.nxdl.xml:/pulse_duration@units [NX_TIME]
DEBUG - ===== GROUP (//entry/instrument/beam_pump [NXmpes::/NXentry/NXinstrument/NXbeam]):
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam']
DEBUG - classes:
@@ -716,22 +775,22 @@ DEBUG - documentation (NXinstrument.nxdl.xml:/BEAM):
DEBUG -
DEBUG - documentation (NXbeam.nxdl.xml:):
DEBUG -
- Properties of the neutron or X-ray beam at a given location.
-
- This group is intended to be referenced
- by beamline component groups within the :ref:`NXinstrument` group or by the :ref:`NXsample` group. This group is
- especially valuable in storing the results of instrument simulations in which it is useful
- to specify the beam profile, time distribution etc. at each beamline component. Otherwise,
- its most likely use is in the :ref:`NXsample` group in which it defines the results of the neutron
- scattering by the sample, e.g., energy transfer, polarizations. Finally, There are cases where the beam is
- considered as a beamline component and this group may be defined as a subgroup directly inside
- :ref:`NXinstrument`, in which case it is recommended that the position of the beam is specified by an
- :ref:`NXtransformations` group, unless the beam is at the origin (which is the sample).
-
- Note that incident_wavelength and related fields can be a scalar values or arrays, depending on the use case.
- To support these use cases, the explicit dimensionality of these fields is not specified, but it can be inferred
- by the presense of and shape of accompanying fields, such as incident_wavelength_weights for a polychromatic beam.
-
+ Properties of the neutron or X-ray beam at a given location.
+
+ This group is intended to be referenced
+ by beamline component groups within the :ref:`NXinstrument` group or by the :ref:`NXsample` group. This group is
+ especially valuable in storing the results of instrument simulations in which it is useful
+ to specify the beam profile, time distribution etc. at each beamline component. Otherwise,
+ its most likely use is in the :ref:`NXsample` group in which it defines the results of the neutron
+ scattering by the sample, e.g., energy transfer, polarizations. Finally, there are cases where the beam is
+ considered as a beamline component and this group may be defined as a subgroup directly inside
+ :ref:`NXinstrument`, in which case it is recommended that the position of the beam is specified by an
+ :ref:`NXtransformations` group, unless the beam is at the origin (which is the sample).
+
+ Note that incident_wavelength and related fields can be scalar values or arrays, depending on the use case.
+ To support these use cases, the explicit dimensionality of these fields is not specified, but it can be inferred
+ by the presence and shape of accompanying fields, such as incident_wavelength_weights for a polychromatic beam.
+
DEBUG - ===== ATTRS (//entry/instrument/beam_pump@NX_class)
DEBUG - value: NXbeam
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam']
@@ -743,14 +802,20 @@ DEBUG - @NX_class [NX_CHAR]
DEBUG -
DEBUG - ===== FIELD (//entry/instrument/beam_pump/average_power):
DEBUG - value: 444.0
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_FLOAT']
+DEBUG - classes:
+NXbeam.nxdl.xml:/average_power
+DEBUG - <>
+DEBUG - documentation (NXbeam.nxdl.xml:/average_power):
DEBUG -
+ Average power at the diagnostic point
+
DEBUG - ===== ATTRS (//entry/instrument/beam_pump/average_power@units)
DEBUG - value: mW
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_FLOAT']
+DEBUG - classes:
+NXbeam.nxdl.xml:/average_power
+DEBUG - NXbeam.nxdl.xml:/average_power@units [NX_POWER]
DEBUG - ===== FIELD (//entry/instrument/beam_pump/distance):
DEBUG - value: 0.0
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_NUMBER']
@@ -775,8 +840,8 @@ NXbeam.nxdl.xml:/extent
DEBUG - <>
DEBUG - documentation (NXbeam.nxdl.xml:/extent):
DEBUG -
- Size of the beam entering this component. Note this represents
- a rectangular beam aperture, and values represent FWHM
+ Size of the beam entering this component. Note this represents
+ a rectangular beam aperture, and values represent FWHM
DEBUG - ===== ATTRS (//entry/instrument/beam_pump/extent@units)
DEBUG - value: µm
@@ -786,14 +851,20 @@ NXbeam.nxdl.xml:/extent
DEBUG - NXbeam.nxdl.xml:/extent@units [NX_LENGTH]
DEBUG - ===== FIELD (//entry/instrument/beam_pump/fluence):
DEBUG - value: 1.3
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_FLOAT']
+DEBUG - classes:
+NXbeam.nxdl.xml:/fluence
+DEBUG - <>
+DEBUG - documentation (NXbeam.nxdl.xml:/fluence):
DEBUG -
+ Incident fluence at the diagnostic point
+
DEBUG - ===== ATTRS (//entry/instrument/beam_pump/fluence@units)
DEBUG - value: mJ/cm^2
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_FLOAT']
+DEBUG - classes:
+NXbeam.nxdl.xml:/fluence
+DEBUG - NXbeam.nxdl.xml:/fluence@units [NX_ANY]
DEBUG - ===== FIELD (//entry/instrument/beam_pump/incident_energy):
DEBUG - value: 1.2
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_FLOAT']
@@ -804,7 +875,24 @@ DEBUG - <>
DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/BEAM/incident_energy):
DEBUG -
DEBUG - documentation (NXbeam.nxdl.xml:/incident_energy):
-DEBUG - Energy carried by each particle of the beam on entering the beamline component
+DEBUG -
+ Energy carried by each particle of the beam on entering the beamline component.
+
+ In the case of a monochromatic beam this is the scalar energy.
+ Several other use cases are permitted, depending on the
+ presence of other incident_energy_X fields.
+
+ * In the case of a polychromatic beam this is an array of length m of energies, with the relative weights in incident_energy_weights.
+ * In the case of a monochromatic beam that varies shot-to-shot, this is an array of energies, one for each recorded shot.
+ Here, incident_energy_weights and incident_energy_spread are not set.
+ * In the case of a polychromatic beam that varies shot-to-shot,
+ this is an array of length m with the relative weights in incident_energy_weights as a 2D array.
+ * In the case of a polychromatic beam that varies shot-to-shot and where the channels also vary,
+ this is a 2D array of dimensions nP by m (slow to fast) with the relative weights in incident_energy_weights as a 2D array.
+
+ Note, variants are a good way to represent several of these use cases in a single dataset,
+ e.g. if a calibrated, single-value energy value is available along with the original spectrum from which it was calibrated.
+
DEBUG - ===== ATTRS (//entry/instrument/beam_pump/incident_energy@units)
DEBUG - value: eV
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_FLOAT']
@@ -818,15 +906,25 @@ DEBUG - value: 0.05
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_NUMBER']
DEBUG - classes:
NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/BEAM/incident_energy_spread
+NXbeam.nxdl.xml:/incident_energy_spread
DEBUG - <>
DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/BEAM/incident_energy_spread):
DEBUG -
+DEBUG - documentation (NXbeam.nxdl.xml:/incident_energy_spread):
+DEBUG -
+ The energy spread FWHM for the corresponding energy(ies) in incident_energy. In the case of shot-to-shot variation in
+ the energy spread, this is a 2D array of dimension nP by m
+ (slow to fast) of the spreads of the corresponding
+ energy in incident_energy.
+
DEBUG - ===== ATTRS (//entry/instrument/beam_pump/incident_energy_spread@units)
DEBUG - value: eV
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_NUMBER']
DEBUG - classes:
NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/BEAM/incident_energy_spread
+NXbeam.nxdl.xml:/incident_energy_spread
DEBUG - NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/BEAM/incident_energy_spread@units [NX_ENERGY]
+DEBUG - NXbeam.nxdl.xml:/incident_energy_spread@units [NX_ENERGY]
DEBUG - ===== FIELD (//entry/instrument/beam_pump/incident_polarization):
DEBUG - value: [1 1 0 0]
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_NUMBER']
@@ -837,7 +935,10 @@ DEBUG - <>
DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/BEAM/incident_polarization):
DEBUG -
DEBUG - documentation (NXbeam.nxdl.xml:/incident_polarization):
-DEBUG - Polarization vector on entering beamline component
+DEBUG -
+ Incident polarization as a Stokes vector
+ on entering beamline component
+
DEBUG - ===== ATTRS (//entry/instrument/beam_pump/incident_polarization@units)
DEBUG - value: V^2/mm^2
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_NUMBER']
@@ -854,38 +955,38 @@ NXbeam.nxdl.xml:/incident_wavelength
DEBUG - <>
DEBUG - documentation (NXbeam.nxdl.xml:/incident_wavelength):
DEBUG -
- In the case of a monochromatic beam this is the scalar
- wavelength.
-
- Several other use cases are permitted, depending on the
- presence or absence of other incident_wavelength_X
- fields.
-
- In the case of a polychromatic beam this is an array of
- length **m** of wavelengths, with the relative weights
- in ``incident_wavelength_weights``.
-
- In the case of a monochromatic beam that varies shot-
- to-shot, this is an array of wavelengths, one for each
- recorded shot. Here, ``incident_wavelength_weights`` and
- incident_wavelength_spread are not set.
-
- In the case of a polychromatic beam that varies shot-to-
- shot, this is an array of length **m** with the relative
- weights in ``incident_wavelength_weights`` as a 2D array.
-
- In the case of a polychromatic beam that varies shot-to-
- shot and where the channels also vary, this is a 2D array
- of dimensions **nP** by **m** (slow to fast) with the
- relative weights in ``incident_wavelength_weights`` as a 2D
- array.
-
- Note, :ref:`variants ` are a good way
- to represent several of these use cases in a single dataset,
- e.g. if a calibrated, single-value wavelength value is
- available along with the original spectrum from which it
- was calibrated.
- Wavelength on entering beamline component
+ In the case of a monochromatic beam this is the scalar
+ wavelength.
+
+ Several other use cases are permitted, depending on the
+ presence or absence of other incident_wavelength_X
+ fields.
+
+ In the case of a polychromatic beam this is an array of
+ length **m** of wavelengths, with the relative weights
+ in ``incident_wavelength_weights``.
+
+ In the case of a monochromatic beam that varies shot-
+ to-shot, this is an array of wavelengths, one for each
+ recorded shot. Here, ``incident_wavelength_weights`` and
+ incident_wavelength_spread are not set.
+
+ In the case of a polychromatic beam that varies shot-to-
+ shot, this is an array of length **m** with the relative
+ weights in ``incident_wavelength_weights`` as a 2D array.
+
+ In the case of a polychromatic beam that varies shot-to-
+ shot and where the channels also vary, this is a 2D array
+ of dimensions **nP** by **m** (slow to fast) with the
+ relative weights in ``incident_wavelength_weights`` as a 2D
+ array.
+
+ Note, :ref:`variants ` are a good way
+ to represent several of these use cases in a single dataset,
+ e.g. if a calibrated, single-value wavelength value is
+ available along with the original spectrum from which it
+ was calibrated.
+ Wavelength on entering beamline component
DEBUG - ===== ATTRS (//entry/instrument/beam_pump/incident_wavelength@units)
DEBUG - value: nm
@@ -895,24 +996,36 @@ NXbeam.nxdl.xml:/incident_wavelength
DEBUG - NXbeam.nxdl.xml:/incident_wavelength@units [NX_WAVELENGTH]
DEBUG - ===== FIELD (//entry/instrument/beam_pump/pulse_duration):
DEBUG - value: 140.0
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_FLOAT']
+DEBUG - classes:
+NXbeam.nxdl.xml:/pulse_duration
+DEBUG - <>
+DEBUG - documentation (NXbeam.nxdl.xml:/pulse_duration):
DEBUG -
+ FWHM duration of the pulses at the diagnostic point
+
DEBUG - ===== ATTRS (//entry/instrument/beam_pump/pulse_duration@units)
DEBUG - value: fs
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_FLOAT']
+DEBUG - classes:
+NXbeam.nxdl.xml:/pulse_duration
+DEBUG - NXbeam.nxdl.xml:/pulse_duration@units [NX_TIME]
DEBUG - ===== FIELD (//entry/instrument/beam_pump/pulse_energy):
DEBUG - value: 0.889
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_FLOAT']
+DEBUG - classes:
+NXbeam.nxdl.xml:/pulse_energy
+DEBUG - <>
+DEBUG - documentation (NXbeam.nxdl.xml:/pulse_energy):
DEBUG -
+ Energy of a single pulse at the diagnostic point
+
DEBUG - ===== ATTRS (//entry/instrument/beam_pump/pulse_energy@units)
DEBUG - value: µJ
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXbeam', 'NX_FLOAT']
+DEBUG - classes:
+NXbeam.nxdl.xml:/pulse_energy
+DEBUG - NXbeam.nxdl.xml:/pulse_energy@units [NX_ENERGY]
DEBUG - ===== GROUP (//entry/instrument/electronanalyser [NXmpes::/NXentry/NXinstrument/NXelectronanalyser]):
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser']
DEBUG - classes:
@@ -948,8 +1061,8 @@ DEBUG -
DEBUG - documentation (NXcollectioncolumn.nxdl.xml:):
DEBUG -
- Subclass of NXelectronanalyser to describe the electron collection column of a
- photoelectron analyser.
+ Subclass of NXelectronanalyser to describe the electron collection
+ column of a photoelectron analyser.
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn@NX_class)
DEBUG - value: NXcollectioncolumn
@@ -978,7 +1091,9 @@ DEBUG -
or contrast aperture
DEBUG - documentation (NXaperture.nxdl.xml:):
-DEBUG - A beamline aperture. This group is deprecated, use NXslit instead.
+DEBUG -
+ A beamline aperture. This group is deprecated, use NXslit instead.
+
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/contrast_aperture@NX_class)
DEBUG - value: NXaperture
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture']
@@ -988,35 +1103,80 @@ NXcollectioncolumn.nxdl.xml:/APERTURE
NXaperture.nxdl.xml:
DEBUG - @NX_class [NX_CHAR]
DEBUG -
-DEBUG - ===== GROUP (//entry/instrument/electronanalyser/collectioncolumn/contrast_aperture/ca_m3 [NXmpes::/NXentry/NXinstrument/NXelectronanalyser/NXcollectioncolumn/NXaperture/ca_m3]):
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture']
-DEBUG - NOT IN SCHEMA
+DEBUG - ===== GROUP (//entry/instrument/electronanalyser/collectioncolumn/contrast_aperture/ca_m3 [NXmpes::/NXentry/NXinstrument/NXelectronanalyser/NXcollectioncolumn/NXaperture/NXpositioner]):
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture', 'NXpositioner']
+DEBUG - classes:
+NXaperture.nxdl.xml:/POSITIONER
+NXpositioner.nxdl.xml:
+DEBUG - <>
+DEBUG - documentation (NXaperture.nxdl.xml:/POSITIONER):
+DEBUG -
+ Stores the raw positions of aperture motors.
+
+DEBUG - documentation (NXpositioner.nxdl.xml:):
+DEBUG -
+ A generic positioner such as a motor or piezo-electric transducer.
+
+DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/contrast_aperture/ca_m3@NX_class)
+DEBUG - value: NXpositioner
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture', 'NXpositioner']
+DEBUG - classes:
+NXaperture.nxdl.xml:/POSITIONER
+NXpositioner.nxdl.xml:
+DEBUG - @NX_class [NX_CHAR]
DEBUG -
DEBUG - ===== FIELD (//entry/instrument/electronanalyser/collectioncolumn/contrast_aperture/ca_m3/value):
DEBUG - value: -11.49979350759219
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture', 'NXpositioner', 'NX_NUMBER']
+DEBUG - classes:
+NXpositioner.nxdl.xml:/value
+DEBUG - <>
+DEBUG - documentation (NXpositioner.nxdl.xml:/value):
+DEBUG - best known value of positioner - need [n] as may be scanned
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/contrast_aperture/ca_m3/value@units)
DEBUG - value: mm
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture', 'NXpositioner', 'NX_NUMBER']
+DEBUG - classes:
+NXpositioner.nxdl.xml:/value
+DEBUG - NXpositioner.nxdl.xml:/value@units [NX_ANY]
DEBUG - ===== FIELD (//entry/instrument/electronanalyser/collectioncolumn/contrast_aperture/shape):
DEBUG - value: b'open'
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture', 'NX_CHAR']
+DEBUG - classes:
+NXaperture.nxdl.xml:/shape
+DEBUG - <>
+DEBUG - enumeration (NXaperture.nxdl.xml:/shape):
+DEBUG - -> straight slit
+DEBUG - -> curved slit
+DEBUG - -> pinhole
+DEBUG - -> circle
+DEBUG - -> square
+DEBUG - -> hexagon
+DEBUG - -> octagon
+DEBUG - -> bladed
+DEBUG - -> open
+DEBUG - -> grid
+DEBUG - documentation (NXaperture.nxdl.xml:/shape):
+DEBUG -
+ Shape of the aperture.
+
DEBUG - ===== FIELD (//entry/instrument/electronanalyser/collectioncolumn/contrast_aperture/size):
DEBUG - value: nan
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture', 'NX_NUMBER']
+DEBUG - classes:
+NXaperture.nxdl.xml:/size
+DEBUG - <>
+DEBUG - documentation (NXaperture.nxdl.xml:/size):
DEBUG -
+ The relevant dimension for the aperture, i.e. slit width, pinhole and iris
+ diameter
+
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/contrast_aperture/size@units)
DEBUG - value: µm
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture', 'NX_NUMBER']
+DEBUG - classes:
+NXaperture.nxdl.xml:/size
+DEBUG - NXaperture.nxdl.xml:/size@units [NX_LENGTH]
DEBUG - ===== FIELD (//entry/instrument/electronanalyser/collectioncolumn/extractor_current):
DEBUG - value: -0.1309711275510204
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NX_FLOAT']
@@ -1068,7 +1228,9 @@ DEBUG -
or contrast aperture
DEBUG - documentation (NXaperture.nxdl.xml:):
-DEBUG - A beamline aperture. This group is deprecated, use NXslit instead.
+DEBUG -
+ A beamline aperture. This group is deprecated, use NXslit instead.
+
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/field_aperture@NX_class)
DEBUG - value: NXaperture
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture']
@@ -1078,49 +1240,116 @@ NXcollectioncolumn.nxdl.xml:/APERTURE
NXaperture.nxdl.xml:
DEBUG - @NX_class [NX_CHAR]
DEBUG -
-DEBUG - ===== GROUP (//entry/instrument/electronanalyser/collectioncolumn/field_aperture/fa_m1 [NXmpes::/NXentry/NXinstrument/NXelectronanalyser/NXcollectioncolumn/NXaperture/fa_m1]):
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture']
-DEBUG - NOT IN SCHEMA
+DEBUG - ===== GROUP (//entry/instrument/electronanalyser/collectioncolumn/field_aperture/fa_m1 [NXmpes::/NXentry/NXinstrument/NXelectronanalyser/NXcollectioncolumn/NXaperture/NXpositioner]):
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture', 'NXpositioner']
+DEBUG - classes:
+NXaperture.nxdl.xml:/POSITIONER
+NXpositioner.nxdl.xml:
+DEBUG - <>
+DEBUG - documentation (NXaperture.nxdl.xml:/POSITIONER):
+DEBUG -
+ Stores the raw positions of aperture motors.
+
+DEBUG - documentation (NXpositioner.nxdl.xml:):
+DEBUG -
+ A generic positioner such as a motor or piezo-electric transducer.
+
+DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/field_aperture/fa_m1@NX_class)
+DEBUG - value: NXpositioner
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture', 'NXpositioner']
+DEBUG - classes:
+NXaperture.nxdl.xml:/POSITIONER
+NXpositioner.nxdl.xml:
+DEBUG - @NX_class [NX_CHAR]
DEBUG -
DEBUG - ===== FIELD (//entry/instrument/electronanalyser/collectioncolumn/field_aperture/fa_m1/value):
DEBUG - value: 3.749874153422982
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture', 'NXpositioner', 'NX_NUMBER']
+DEBUG - classes:
+NXpositioner.nxdl.xml:/value
+DEBUG - <>
+DEBUG - documentation (NXpositioner.nxdl.xml:/value):
+DEBUG - best known value of positioner - need [n] as may be scanned
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/field_aperture/fa_m1/value@units)
DEBUG - value: mm
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture', 'NXpositioner', 'NX_NUMBER']
+DEBUG - classes:
+NXpositioner.nxdl.xml:/value
+DEBUG - NXpositioner.nxdl.xml:/value@units [NX_ANY]
+DEBUG - ===== GROUP (//entry/instrument/electronanalyser/collectioncolumn/field_aperture/fa_m2 [NXmpes::/NXentry/NXinstrument/NXelectronanalyser/NXcollectioncolumn/NXaperture/NXpositioner]):
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture', 'NXpositioner']
+DEBUG - classes:
+NXaperture.nxdl.xml:/POSITIONER
+NXpositioner.nxdl.xml:
+DEBUG - <>
+DEBUG - documentation (NXaperture.nxdl.xml:/POSITIONER):
DEBUG -
-DEBUG - ===== GROUP (//entry/instrument/electronanalyser/collectioncolumn/field_aperture/fa_m2 [NXmpes::/NXentry/NXinstrument/NXelectronanalyser/NXcollectioncolumn/NXaperture/fa_m2]):
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture']
-DEBUG - NOT IN SCHEMA
+ Stores the raw positions of aperture motors.
+
+DEBUG - documentation (NXpositioner.nxdl.xml:):
+DEBUG -
+ A generic positioner such as a motor or piezo-electric transducer.
+
+DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/field_aperture/fa_m2@NX_class)
+DEBUG - value: NXpositioner
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture', 'NXpositioner']
+DEBUG - classes:
+NXaperture.nxdl.xml:/POSITIONER
+NXpositioner.nxdl.xml:
+DEBUG - @NX_class [NX_CHAR]
DEBUG -
DEBUG - ===== FIELD (//entry/instrument/electronanalyser/collectioncolumn/field_aperture/fa_m2/value):
DEBUG - value: -5.200156936301793
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture', 'NXpositioner', 'NX_NUMBER']
+DEBUG - classes:
+NXpositioner.nxdl.xml:/value
+DEBUG - <>
+DEBUG - documentation (NXpositioner.nxdl.xml:/value):
+DEBUG - best known value of positioner - need [n] as may be scanned
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/field_aperture/fa_m2/value@units)
DEBUG - value: mm
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture', 'NXpositioner', 'NX_NUMBER']
+DEBUG - classes:
+NXpositioner.nxdl.xml:/value
+DEBUG - NXpositioner.nxdl.xml:/value@units [NX_ANY]
DEBUG - ===== FIELD (//entry/instrument/electronanalyser/collectioncolumn/field_aperture/shape):
DEBUG - value: b'circle'
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture', 'NX_CHAR']
+DEBUG - classes:
+NXaperture.nxdl.xml:/shape
+DEBUG - <>
+DEBUG - enumeration (NXaperture.nxdl.xml:/shape):
+DEBUG - -> straight slit
+DEBUG - -> curved slit
+DEBUG - -> pinhole
+DEBUG - -> circle
+DEBUG - -> square
+DEBUG - -> hexagon
+DEBUG - -> octagon
+DEBUG - -> bladed
+DEBUG - -> open
+DEBUG - -> grid
+DEBUG - documentation (NXaperture.nxdl.xml:/shape):
+DEBUG -
+ Shape of the aperture.
+
DEBUG - ===== FIELD (//entry/instrument/electronanalyser/collectioncolumn/field_aperture/size):
DEBUG - value: 200.0
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture', 'NX_NUMBER']
+DEBUG - classes:
+NXaperture.nxdl.xml:/size
+DEBUG - <>
+DEBUG - documentation (NXaperture.nxdl.xml:/size):
DEBUG -
+ The relevant dimension for the aperture, i.e. slit width, pinhole and iris
+ diameter
+
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/field_aperture/size@units)
DEBUG - value: µm
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXaperture', 'NX_NUMBER']
+DEBUG - classes:
+NXaperture.nxdl.xml:/size
+DEBUG - NXaperture.nxdl.xml:/size@units [NX_LENGTH]
DEBUG - ===== GROUP (//entry/instrument/electronanalyser/collectioncolumn/lens_A [NXmpes::/NXentry/NXinstrument/NXelectronanalyser/NXcollectioncolumn/NXlens_em]):
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXcollectioncolumn', 'NXlens_em']
DEBUG - classes:
@@ -1133,14 +1362,14 @@ DEBUG -
DEBUG - documentation (NXlens_em.nxdl.xml:):
DEBUG -
- Description of an electro-magnetic lens or a compound lens.
+ Base class for an electro-magnetic lens or a compound lens.
- For NXtransformations the origin of the coordinate system is placed
- in the center of the lens
- (its polepiece, pinhole, or another point of reference).
- The origin should be specified in the NXtransformations.
+ For :ref:`NXtransformations` the origin of the coordinate system is placed
+ in the center of the lens (its polepiece, pinhole, or another
+ point of reference). The origin should be specified in the :ref:`NXtransformations`.
- For details of electro-magnetic lenses in the literature see e.g. `L. Reimer `_
+ For details of electro-magnetic lenses in the literature
+ see e.g. `L. Reimer `_
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_A@NX_class)
DEBUG - value: NXlens_em
@@ -1169,8 +1398,10 @@ NXlens_em.nxdl.xml:/voltage
DEBUG - <>
DEBUG - documentation (NXlens_em.nxdl.xml:/voltage):
DEBUG -
- Excitation voltage of the lens. For dipoles it is a single number. For higher
- orders, it is an array.
+ Excitation voltage of the lens.
+
+ For dipoles it is a single number.
+ For higher order multipoles, it is an array.
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_A/voltage@units)
DEBUG - value: V
@@ -1190,14 +1421,14 @@ DEBUG -
DEBUG - documentation (NXlens_em.nxdl.xml:):
DEBUG -
- Description of an electro-magnetic lens or a compound lens.
+ Base class for an electro-magnetic lens or a compound lens.
- For NXtransformations the origin of the coordinate system is placed
- in the center of the lens
- (its polepiece, pinhole, or another point of reference).
- The origin should be specified in the NXtransformations.
+ For :ref:`NXtransformations` the origin of the coordinate system is placed
+ in the center of the lens (its polepiece, pinhole, or another
+ point of reference). The origin should be specified in the :ref:`NXtransformations`.
- For details of electro-magnetic lenses in the literature see e.g. `L. Reimer `_
+ For details of electro-magnetic lenses in the literature
+ see e.g. `L. Reimer `_
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_B@NX_class)
DEBUG - value: NXlens_em
@@ -1226,8 +1457,10 @@ NXlens_em.nxdl.xml:/voltage
DEBUG - <>
DEBUG - documentation (NXlens_em.nxdl.xml:/voltage):
DEBUG -
- Excitation voltage of the lens. For dipoles it is a single number. For higher
- orders, it is an array.
+ Excitation voltage of the lens.
+
+ For dipoles it is a single number.
+ For higher order multipoles, it is an array.
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_B/voltage@units)
DEBUG - value: V
@@ -1247,14 +1480,14 @@ DEBUG -
DEBUG - documentation (NXlens_em.nxdl.xml:):
DEBUG -
- Description of an electro-magnetic lens or a compound lens.
+ Base class for an electro-magnetic lens or a compound lens.
- For NXtransformations the origin of the coordinate system is placed
- in the center of the lens
- (its polepiece, pinhole, or another point of reference).
- The origin should be specified in the NXtransformations.
+ For :ref:`NXtransformations` the origin of the coordinate system is placed
+ in the center of the lens (its polepiece, pinhole, or another
+ point of reference). The origin should be specified in the :ref:`NXtransformations`.
- For details of electro-magnetic lenses in the literature see e.g. `L. Reimer `_
+ For details of electro-magnetic lenses in the literature
+ see e.g. `L. Reimer `_
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_C@NX_class)
DEBUG - value: NXlens_em
@@ -1283,8 +1516,10 @@ NXlens_em.nxdl.xml:/voltage
DEBUG - <>
DEBUG - documentation (NXlens_em.nxdl.xml:/voltage):
DEBUG -
- Excitation voltage of the lens. For dipoles it is a single number. For higher
- orders, it is an array.
+ Excitation voltage of the lens.
+
+ For dipoles it is a single number.
+ For higher order multipoles, it is an array.
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_C/voltage@units)
DEBUG - value: V
@@ -1304,14 +1539,14 @@ DEBUG -
DEBUG - documentation (NXlens_em.nxdl.xml:):
DEBUG -
- Description of an electro-magnetic lens or a compound lens.
+ Base class for an electro-magnetic lens or a compound lens.
- For NXtransformations the origin of the coordinate system is placed
- in the center of the lens
- (its polepiece, pinhole, or another point of reference).
- The origin should be specified in the NXtransformations.
+ For :ref:`NXtransformations` the origin of the coordinate system is placed
+ in the center of the lens (its polepiece, pinhole, or another
+ point of reference). The origin should be specified in the :ref:`NXtransformations`.
- For details of electro-magnetic lenses in the literature see e.g. `L. Reimer `_
+ For details of electro-magnetic lenses in the literature
+ see e.g. `L. Reimer `_
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_D@NX_class)
DEBUG - value: NXlens_em
@@ -1340,8 +1575,10 @@ NXlens_em.nxdl.xml:/voltage
DEBUG - <>
DEBUG - documentation (NXlens_em.nxdl.xml:/voltage):
DEBUG -
- Excitation voltage of the lens. For dipoles it is a single number. For higher
- orders, it is an array.
+ Excitation voltage of the lens.
+
+ For dipoles it is a single number.
+ For higher order multipoles, it is an array.
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_D/voltage@units)
DEBUG - value: V
@@ -1361,14 +1598,14 @@ DEBUG -
DEBUG - documentation (NXlens_em.nxdl.xml:):
DEBUG -
- Description of an electro-magnetic lens or a compound lens.
+ Base class for an electro-magnetic lens or a compound lens.
- For NXtransformations the origin of the coordinate system is placed
- in the center of the lens
- (its polepiece, pinhole, or another point of reference).
- The origin should be specified in the NXtransformations.
+ For :ref:`NXtransformations` the origin of the coordinate system is placed
+ in the center of the lens (its polepiece, pinhole, or another
+ point of reference). The origin should be specified in the :ref:`NXtransformations`.
- For details of electro-magnetic lenses in the literature see e.g. `L. Reimer `_
+ For details of electro-magnetic lenses in the literature
+ see e.g. `L. Reimer `_
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_E@NX_class)
DEBUG - value: NXlens_em
@@ -1397,8 +1634,10 @@ NXlens_em.nxdl.xml:/voltage
DEBUG - <>
DEBUG - documentation (NXlens_em.nxdl.xml:/voltage):
DEBUG -
- Excitation voltage of the lens. For dipoles it is a single number. For higher
- orders, it is an array.
+ Excitation voltage of the lens.
+
+ For dipoles it is a single number.
+ For higher order multipoles, it is an array.
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_E/voltage@units)
DEBUG - value: V
@@ -1418,14 +1657,14 @@ DEBUG -
DEBUG - documentation (NXlens_em.nxdl.xml:):
DEBUG -
- Description of an electro-magnetic lens or a compound lens.
+ Base class for an electro-magnetic lens or a compound lens.
- For NXtransformations the origin of the coordinate system is placed
- in the center of the lens
- (its polepiece, pinhole, or another point of reference).
- The origin should be specified in the NXtransformations.
+ For :ref:`NXtransformations` the origin of the coordinate system is placed
+ in the center of the lens (its polepiece, pinhole, or another
+ point of reference). The origin should be specified in the :ref:`NXtransformations`.
- For details of electro-magnetic lenses in the literature see e.g. `L. Reimer `_
+ For details of electro-magnetic lenses in the literature
+ see e.g. `L. Reimer `_
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_F@NX_class)
DEBUG - value: NXlens_em
@@ -1454,8 +1693,10 @@ NXlens_em.nxdl.xml:/voltage
DEBUG - <>
DEBUG - documentation (NXlens_em.nxdl.xml:/voltage):
DEBUG -
- Excitation voltage of the lens. For dipoles it is a single number. For higher
- orders, it is an array.
+ Excitation voltage of the lens.
+
+ For dipoles it is a single number.
+ For higher order multipoles, it is an array.
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_F/voltage@units)
DEBUG - value: V
@@ -1475,14 +1716,14 @@ DEBUG -
DEBUG - documentation (NXlens_em.nxdl.xml:):
DEBUG -
- Description of an electro-magnetic lens or a compound lens.
+ Base class for an electro-magnetic lens or a compound lens.
- For NXtransformations the origin of the coordinate system is placed
- in the center of the lens
- (its polepiece, pinhole, or another point of reference).
- The origin should be specified in the NXtransformations.
+ For :ref:`NXtransformations` the origin of the coordinate system is placed
+ in the center of the lens (its polepiece, pinhole, or another
+ point of reference). The origin should be specified in the :ref:`NXtransformations`.
- For details of electro-magnetic lenses in the literature see e.g. `L. Reimer `_
+ For details of electro-magnetic lenses in the literature
+ see e.g. `L. Reimer `_
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_Foc@NX_class)
DEBUG - value: NXlens_em
@@ -1511,8 +1752,10 @@ NXlens_em.nxdl.xml:/voltage
DEBUG - <>
DEBUG - documentation (NXlens_em.nxdl.xml:/voltage):
DEBUG -
- Excitation voltage of the lens. For dipoles it is a single number. For higher
- orders, it is an array.
+ Excitation voltage of the lens.
+
+ For dipoles it is a single number.
+ For higher order multipoles, it is an array.
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_Foc/voltage@units)
DEBUG - value: V
@@ -1532,14 +1775,14 @@ DEBUG -
DEBUG - documentation (NXlens_em.nxdl.xml:):
DEBUG -
- Description of an electro-magnetic lens or a compound lens.
+ Base class for an electro-magnetic lens or a compound lens.
- For NXtransformations the origin of the coordinate system is placed
- in the center of the lens
- (its polepiece, pinhole, or another point of reference).
- The origin should be specified in the NXtransformations.
+ For :ref:`NXtransformations` the origin of the coordinate system is placed
+ in the center of the lens (its polepiece, pinhole, or another
+ point of reference). The origin should be specified in the :ref:`NXtransformations`.
- For details of electro-magnetic lenses in the literature see e.g. `L. Reimer `_
+ For details of electro-magnetic lenses in the literature
+ see e.g. `L. Reimer `_
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_G@NX_class)
DEBUG - value: NXlens_em
@@ -1568,8 +1811,10 @@ NXlens_em.nxdl.xml:/voltage
DEBUG - <>
DEBUG - documentation (NXlens_em.nxdl.xml:/voltage):
DEBUG -
- Excitation voltage of the lens. For dipoles it is a single number. For higher
- orders, it is an array.
+ Excitation voltage of the lens.
+
+ For dipoles it is a single number.
+ For higher order multipoles, it is an array.
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_G/voltage@units)
DEBUG - value: V
@@ -1589,14 +1834,14 @@ DEBUG -
DEBUG - documentation (NXlens_em.nxdl.xml:):
DEBUG -
- Description of an electro-magnetic lens or a compound lens.
+ Base class for an electro-magnetic lens or a compound lens.
- For NXtransformations the origin of the coordinate system is placed
- in the center of the lens
- (its polepiece, pinhole, or another point of reference).
- The origin should be specified in the NXtransformations.
+ For :ref:`NXtransformations` the origin of the coordinate system is placed
+ in the center of the lens (its polepiece, pinhole, or another
+ point of reference). The origin should be specified in the :ref:`NXtransformations`.
- For details of electro-magnetic lenses in the literature see e.g. `L. Reimer `_
+ For details of electro-magnetic lenses in the literature
+ see e.g. `L. Reimer `_
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_H@NX_class)
DEBUG - value: NXlens_em
@@ -1625,8 +1870,10 @@ NXlens_em.nxdl.xml:/voltage
DEBUG - <>
DEBUG - documentation (NXlens_em.nxdl.xml:/voltage):
DEBUG -
- Excitation voltage of the lens. For dipoles it is a single number. For higher
- orders, it is an array.
+ Excitation voltage of the lens.
+
+ For dipoles it is a single number.
+ For higher order multipoles, it is an array.
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_H/voltage@units)
DEBUG - value: V
@@ -1646,14 +1893,14 @@ DEBUG -
DEBUG - documentation (NXlens_em.nxdl.xml:):
DEBUG -
- Description of an electro-magnetic lens or a compound lens.
+ Base class for an electro-magnetic lens or a compound lens.
- For NXtransformations the origin of the coordinate system is placed
- in the center of the lens
- (its polepiece, pinhole, or another point of reference).
- The origin should be specified in the NXtransformations.
+ For :ref:`NXtransformations` the origin of the coordinate system is placed
+ in the center of the lens (its polepiece, pinhole, or another
+ point of reference). The origin should be specified in the :ref:`NXtransformations`.
- For details of electro-magnetic lenses in the literature see e.g. `L. Reimer `_
+ For details of electro-magnetic lenses in the literature
+ see e.g. `L. Reimer `_
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_I@NX_class)
DEBUG - value: NXlens_em
@@ -1682,8 +1929,10 @@ NXlens_em.nxdl.xml:/voltage
DEBUG - <>
DEBUG - documentation (NXlens_em.nxdl.xml:/voltage):
DEBUG -
- Excitation voltage of the lens. For dipoles it is a single number. For higher
- orders, it is an array.
+ Excitation voltage of the lens.
+
+ For dipoles it is a single number.
+ For higher order multipoles, it is an array.
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_I/voltage@units)
DEBUG - value: V
@@ -1703,14 +1952,14 @@ DEBUG -
DEBUG - documentation (NXlens_em.nxdl.xml:):
DEBUG -
- Description of an electro-magnetic lens or a compound lens.
+ Base class for an electro-magnetic lens or a compound lens.
- For NXtransformations the origin of the coordinate system is placed
- in the center of the lens
- (its polepiece, pinhole, or another point of reference).
- The origin should be specified in the NXtransformations.
+ For :ref:`NXtransformations` the origin of the coordinate system is placed
+ in the center of the lens (its polepiece, pinhole, or another
+ point of reference). The origin should be specified in the :ref:`NXtransformations`.
- For details of electro-magnetic lenses in the literature see e.g. `L. Reimer `_
+ For details of electro-magnetic lenses in the literature
+ see e.g. `L. Reimer `_
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_UCA@NX_class)
DEBUG - value: NXlens_em
@@ -1739,8 +1988,10 @@ NXlens_em.nxdl.xml:/voltage
DEBUG - <>
DEBUG - documentation (NXlens_em.nxdl.xml:/voltage):
DEBUG -
- Excitation voltage of the lens. For dipoles it is a single number. For higher
- orders, it is an array.
+ Excitation voltage of the lens.
+
+ For dipoles it is a single number.
+ For higher order multipoles, it is an array.
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_UCA/voltage@units)
DEBUG - value: V
@@ -1760,14 +2011,14 @@ DEBUG -
DEBUG - documentation (NXlens_em.nxdl.xml:):
DEBUG -
- Description of an electro-magnetic lens or a compound lens.
+ Base class for an electro-magnetic lens or a compound lens.
- For NXtransformations the origin of the coordinate system is placed
- in the center of the lens
- (its polepiece, pinhole, or another point of reference).
- The origin should be specified in the NXtransformations.
+ For :ref:`NXtransformations` the origin of the coordinate system is placed
+ in the center of the lens (its polepiece, pinhole, or another
+ point of reference). The origin should be specified in the :ref:`NXtransformations`.
- For details of electro-magnetic lenses in the literature see e.g. `L. Reimer `_
+ For details of electro-magnetic lenses in the literature
+ see e.g. `L. Reimer `_
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_UFA@NX_class)
DEBUG - value: NXlens_em
@@ -1796,8 +2047,10 @@ NXlens_em.nxdl.xml:/voltage
DEBUG - <>
DEBUG - documentation (NXlens_em.nxdl.xml:/voltage):
DEBUG -
- Excitation voltage of the lens. For dipoles it is a single number. For higher
- orders, it is an array.
+ Excitation voltage of the lens.
+
+ For dipoles it is a single number.
+ For higher order multipoles, it is an array.
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/collectioncolumn/lens_UFA/voltage@units)
DEBUG - value: V
@@ -1912,8 +2165,8 @@ DEBUG -
DEBUG - documentation (NXdetector.nxdl.xml:):
DEBUG -
- A detector, detector bank, or multidetector.
-
+ A detector, detector bank, or multidetector.
+
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/detector@NX_class)
DEBUG - value: NXdetector
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXdetector']
@@ -1925,19 +2178,26 @@ DEBUG - @NX_class [NX_CHAR]
DEBUG -
DEBUG - ===== FIELD (//entry/instrument/electronanalyser/detector/amplifier_bias):
DEBUG - value: 30.0
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXdetector']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXdetector', 'NX_FLOAT']
+DEBUG - classes:
+NXdetector.nxdl.xml:/amplifier_bias
+DEBUG - <>
+DEBUG - documentation (NXdetector.nxdl.xml:/amplifier_bias):
DEBUG -
+ The low voltage of the amplifier migh not be the ground.
+
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/detector/amplifier_bias@units)
DEBUG - value: V
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXdetector']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXdetector', 'NX_FLOAT']
+DEBUG - classes:
+NXdetector.nxdl.xml:/amplifier_bias
+DEBUG - NXdetector.nxdl.xml:/amplifier_bias@units [NX_VOLTAGE]
DEBUG - ===== FIELD (//entry/instrument/electronanalyser/detector/amplifier_type):
DEBUG - value: b'MCP'
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXdetector', 'NX_CHAR']
DEBUG - classes:
NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/ELECTRONANALYSER/DETECTOR/amplifier_type
+NXdetector.nxdl.xml:/amplifier_type
DEBUG - <>
DEBUG - enumeration (NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/ELECTRONANALYSER/DETECTOR/amplifier_type):
DEBUG - -> MCP
@@ -1946,21 +2206,32 @@ DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/ELECTRONANALYSER/DETECT
DEBUG -
Type of electron amplifier in the first amplification step.
+DEBUG - documentation (NXdetector.nxdl.xml:/amplifier_type):
+DEBUG -
+ Type of electron amplifier, MCP, channeltron, etc.
+
DEBUG - ===== FIELD (//entry/instrument/electronanalyser/detector/amplifier_voltage):
DEBUG - value: 2340.0
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXdetector']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXdetector', 'NX_FLOAT']
+DEBUG - classes:
+NXdetector.nxdl.xml:/amplifier_voltage
+DEBUG - <>
+DEBUG - documentation (NXdetector.nxdl.xml:/amplifier_voltage):
DEBUG -
+ Voltage applied to the amplifier.
+
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/detector/amplifier_voltage@units)
DEBUG - value: V
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXdetector']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXdetector', 'NX_FLOAT']
+DEBUG - classes:
+NXdetector.nxdl.xml:/amplifier_voltage
+DEBUG - NXdetector.nxdl.xml:/amplifier_voltage@units [NX_VOLTAGE]
DEBUG - ===== FIELD (//entry/instrument/electronanalyser/detector/detector_type):
DEBUG - value: b'DLD'
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXdetector', 'NX_CHAR']
DEBUG - classes:
NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/ELECTRONANALYSER/DETECTOR/detector_type
+NXdetector.nxdl.xml:/detector_type
DEBUG - <>
DEBUG - enumeration (NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/ELECTRONANALYSER/DETECTOR/detector_type):
DEBUG - -> DLD
@@ -1973,21 +2244,36 @@ DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/ELECTRONANALYSER/DETECT
DEBUG -
Description of the detector type.
+DEBUG - documentation (NXdetector.nxdl.xml:/detector_type):
+DEBUG -
+ Description of the detector type, DLD, Phosphor+CCD, CMOS.
+
DEBUG - ===== FIELD (//entry/instrument/electronanalyser/detector/detector_voltage):
DEBUG - value: 399.99712810186986
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXdetector']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXdetector', 'NX_FLOAT']
+DEBUG - classes:
+NXdetector.nxdl.xml:/detector_voltage
+DEBUG - <>
+DEBUG - documentation (NXdetector.nxdl.xml:/detector_voltage):
+DEBUG -
+ Voltage applied to detector.
+
DEBUG - ===== ATTRS (//entry/instrument/electronanalyser/detector/detector_voltage@units)
DEBUG - value: V
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXdetector']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXdetector', 'NX_FLOAT']
+DEBUG - classes:
+NXdetector.nxdl.xml:/detector_voltage
+DEBUG - NXdetector.nxdl.xml:/detector_voltage@units [NX_VOLTAGE]
DEBUG - ===== FIELD (//entry/instrument/electronanalyser/detector/sensor_pixels):
DEBUG - value: [1800 1800]
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXdetector']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NXdetector', 'NX_INT']
+DEBUG - classes:
+NXdetector.nxdl.xml:/sensor_pixels
+DEBUG - <>
+DEBUG - documentation (NXdetector.nxdl.xml:/sensor_pixels):
DEBUG -
+ Number of raw active elements in each dimension. Important for swept scans.
+
DEBUG - ===== FIELD (//entry/instrument/electronanalyser/energy_resolution):
DEBUG - value: 110.0
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXelectronanalyser', 'NX_FLOAT']
@@ -2441,15 +2727,22 @@ DEBUG - value: 140.0
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NX_FLOAT']
DEBUG - classes:
NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/energy_resolution
+NXinstrument.nxdl.xml:/energy_resolution
DEBUG - <>
DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/energy_resolution):
DEBUG -
+DEBUG - documentation (NXinstrument.nxdl.xml:/energy_resolution):
+DEBUG -
+ Energy resolution of the experiment (FWHM or gaussian broadening)
+
DEBUG - ===== ATTRS (//entry/instrument/energy_resolution@units)
DEBUG - value: meV
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NX_FLOAT']
DEBUG - classes:
NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/energy_resolution
+NXinstrument.nxdl.xml:/energy_resolution
DEBUG - NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/energy_resolution@units [NX_ENERGY]
+DEBUG - NXinstrument.nxdl.xml:/energy_resolution@units [NX_ENERGY]
DEBUG - ===== GROUP (//entry/instrument/manipulator [NXmpes::/NXentry/NXinstrument/NXmanipulator]):
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXmanipulator']
DEBUG - classes:
@@ -2854,14 +3147,20 @@ DEBUG -
DEBUG - ===== FIELD (//entry/instrument/momentum_resolution):
DEBUG - value: 0.08
-DEBUG - classpath: ['NXentry', 'NXinstrument']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NX_FLOAT']
+DEBUG - classes:
+NXinstrument.nxdl.xml:/momentum_resolution
+DEBUG - <>
+DEBUG - documentation (NXinstrument.nxdl.xml:/momentum_resolution):
DEBUG -
+ Momentum resolution of the experiment (FWHM)
+
DEBUG - ===== ATTRS (//entry/instrument/momentum_resolution@units)
DEBUG - value: 1/angstrom
-DEBUG - classpath: ['NXentry', 'NXinstrument']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NX_FLOAT']
+DEBUG - classes:
+NXinstrument.nxdl.xml:/momentum_resolution
+DEBUG - NXinstrument.nxdl.xml:/momentum_resolution@units [NX_WAVENUMBER]
DEBUG - ===== FIELD (//entry/instrument/name):
DEBUG - value: b'Time-of-flight momentum microscope equipped delay line detector, at the endstation of the high rep-rate HHG source at FHI'
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NX_CHAR']
@@ -2869,7 +3168,9 @@ DEBUG - classes:
NXinstrument.nxdl.xml:/name
DEBUG - <>
DEBUG - documentation (NXinstrument.nxdl.xml:/name):
-DEBUG - Name of instrument
+DEBUG -
+ Name of instrument
+
DEBUG - ===== ATTRS (//entry/instrument/name@short_name)
DEBUG - value: TR-ARPES @ FHI
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NX_CHAR']
@@ -2878,7 +3179,9 @@ NXinstrument.nxdl.xml:/name
DEBUG - NXinstrument.nxdl.xml:/name@short_name - [NX_CHAR]
DEBUG - <>
DEBUG - documentation (NXinstrument.nxdl.xml:/name/short_name):
-DEBUG - short name for instrument, perhaps the acronym
+DEBUG -
+ short name for instrument, perhaps the acronym
+
DEBUG - ===== GROUP (//entry/instrument/source [NXmpes::/NXentry/NXinstrument/NXsource]):
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource']
DEBUG - classes:
@@ -2896,7 +3199,9 @@ DEBUG -
DEBUG - documentation (NXinstrument.nxdl.xml:/SOURCE):
DEBUG -
DEBUG - documentation (NXsource.nxdl.xml:):
-DEBUG - The neutron or x-ray storage ring/facility.
+DEBUG -
+ The neutron or x-ray storage ring/facility.
+
DEBUG - ===== ATTRS (//entry/instrument/source@NX_class)
DEBUG - value: NXsource
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource']
@@ -2913,7 +3218,9 @@ DEBUG - classes:
NXsource.nxdl.xml:/frequency
DEBUG - <>
DEBUG - documentation (NXsource.nxdl.xml:/frequency):
-DEBUG - Frequency of pulsed source
+DEBUG -
+ Frequency of pulsed source
+
DEBUG - ===== ATTRS (//entry/instrument/source/frequency@units)
DEBUG - value: kHz
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource', 'NX_FLOAT']
@@ -2930,7 +3237,9 @@ DEBUG - enumeration (NXsource.nxdl.xml:/mode):
DEBUG - -> Single Bunch
DEBUG - -> Multi Bunch
DEBUG - documentation (NXsource.nxdl.xml:/mode):
-DEBUG - source operating mode
+DEBUG -
+ source operating mode
+
DEBUG - ===== FIELD (//entry/instrument/source/name):
DEBUG - value: b'HHG @ TR-ARPES @ FHI'
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource', 'NX_CHAR']
@@ -2941,17 +3250,26 @@ DEBUG - <>
DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/SOURCE/name):
DEBUG -
DEBUG - documentation (NXsource.nxdl.xml:/name):
-DEBUG - Name of source
+DEBUG -
+ Name of source
+
DEBUG - ===== FIELD (//entry/instrument/source/photon_energy):
DEBUG - value: 21.7
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource', 'NX_FLOAT']
+DEBUG - classes:
+NXsource.nxdl.xml:/photon_energy
+DEBUG - <>
+DEBUG - documentation (NXsource.nxdl.xml:/photon_energy):
DEBUG -
+ The center photon energy of the source, before it is
+ monochromatized or converted
+
DEBUG - ===== ATTRS (//entry/instrument/source/photon_energy@units)
DEBUG - value: eV
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource', 'NX_FLOAT']
+DEBUG - classes:
+NXsource.nxdl.xml:/photon_energy
+DEBUG - NXsource.nxdl.xml:/photon_energy@units [NX_ENERGY]
DEBUG - ===== FIELD (//entry/instrument/source/probe):
DEBUG - value: b'ultraviolet'
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource', 'NX_CHAR']
@@ -2978,7 +3296,9 @@ DEBUG -
restricted.
DEBUG - documentation (NXsource.nxdl.xml:/probe):
-DEBUG - type of radiation probe (pick one from the enumerated list and spell exactly)
+DEBUG -
+ type of radiation probe (pick one from the enumerated list and spell exactly)
+
DEBUG - ===== FIELD (//entry/instrument/source/type):
DEBUG - value: b'HHG laser'
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource', 'NX_CHAR']
@@ -3013,7 +3333,9 @@ DEBUG - -> Metal Jet X-ray
DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/SOURCE/type):
DEBUG -
DEBUG - documentation (NXsource.nxdl.xml:/type):
-DEBUG - type of radiation source (pick one from the enumerated list and spell exactly)
+DEBUG -
+ type of radiation source (pick one from the enumerated list and spell exactly)
+
DEBUG - ===== GROUP (//entry/instrument/source_pump [NXmpes::/NXentry/NXinstrument/NXsource]):
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource']
DEBUG - classes:
@@ -3031,7 +3353,9 @@ DEBUG -
DEBUG - documentation (NXinstrument.nxdl.xml:/SOURCE):
DEBUG -
DEBUG - documentation (NXsource.nxdl.xml:):
-DEBUG - The neutron or x-ray storage ring/facility.
+DEBUG -
+ The neutron or x-ray storage ring/facility.
+
DEBUG - ===== ATTRS (//entry/instrument/source_pump@NX_class)
DEBUG - value: NXsource
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource']
@@ -3048,7 +3372,9 @@ DEBUG - classes:
NXsource.nxdl.xml:/frequency
DEBUG - <>
DEBUG - documentation (NXsource.nxdl.xml:/frequency):
-DEBUG - Frequency of pulsed source
+DEBUG -
+ Frequency of pulsed source
+
DEBUG - ===== ATTRS (//entry/instrument/source_pump/frequency@units)
DEBUG - value: kHz
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource', 'NX_FLOAT']
@@ -3065,7 +3391,9 @@ DEBUG - enumeration (NXsource.nxdl.xml:/mode):
DEBUG - -> Single Bunch
DEBUG - -> Multi Bunch
DEBUG - documentation (NXsource.nxdl.xml:/mode):
-DEBUG - source operating mode
+DEBUG -
+ source operating mode
+
DEBUG - ===== FIELD (//entry/instrument/source_pump/name):
DEBUG - value: b'OPCPA @ TR-ARPES @ FHI'
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource', 'NX_CHAR']
@@ -3076,17 +3404,26 @@ DEBUG - <>
DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/SOURCE/name):
DEBUG -
DEBUG - documentation (NXsource.nxdl.xml:/name):
-DEBUG - Name of source
+DEBUG -
+ Name of source
+
DEBUG - ===== FIELD (//entry/instrument/source_pump/photon_energy):
DEBUG - value: 1.2
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource', 'NX_FLOAT']
+DEBUG - classes:
+NXsource.nxdl.xml:/photon_energy
+DEBUG - <>
+DEBUG - documentation (NXsource.nxdl.xml:/photon_energy):
DEBUG -
+ The center photon energy of the source, before it is
+ monochromatized or converted
+
DEBUG - ===== ATTRS (//entry/instrument/source_pump/photon_energy@units)
DEBUG - value: eV
-DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource', 'NX_FLOAT']
+DEBUG - classes:
+NXsource.nxdl.xml:/photon_energy
+DEBUG - NXsource.nxdl.xml:/photon_energy@units [NX_ENERGY]
DEBUG - ===== FIELD (//entry/instrument/source_pump/probe):
DEBUG - value: b'visible light'
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource', 'NX_CHAR']
@@ -3113,7 +3450,9 @@ DEBUG -
restricted.
DEBUG - documentation (NXsource.nxdl.xml:/probe):
-DEBUG - type of radiation probe (pick one from the enumerated list and spell exactly)
+DEBUG -
+ type of radiation probe (pick one from the enumerated list and spell exactly)
+
DEBUG - ===== FIELD (//entry/instrument/source_pump/type):
DEBUG - value: b'Optical Laser'
DEBUG - classpath: ['NXentry', 'NXinstrument', 'NXsource', 'NX_CHAR']
@@ -3148,17 +3487,25 @@ DEBUG - -> Metal Jet X-ray
DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY/INSTRUMENT/SOURCE/type):
DEBUG -
DEBUG - documentation (NXsource.nxdl.xml:/type):
-DEBUG - type of radiation source (pick one from the enumerated list and spell exactly)
+DEBUG -
+ type of radiation source (pick one from the enumerated list and spell exactly)
+
DEBUG - ===== FIELD (//entry/instrument/temporal_resolution):
DEBUG - value: 35.0
-DEBUG - classpath: ['NXentry', 'NXinstrument']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NX_FLOAT']
+DEBUG - classes:
+NXinstrument.nxdl.xml:/temporal_resolution
+DEBUG - <>
+DEBUG - documentation (NXinstrument.nxdl.xml:/temporal_resolution):
DEBUG -
+ Temporal resolution of the experiment (FWHM)
+
DEBUG - ===== ATTRS (//entry/instrument/temporal_resolution@units)
DEBUG - value: fs
-DEBUG - classpath: ['NXentry', 'NXinstrument']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXinstrument', 'NX_FLOAT']
+DEBUG - classes:
+NXinstrument.nxdl.xml:/temporal_resolution
+DEBUG - NXinstrument.nxdl.xml:/temporal_resolution@units [NX_TIME]
DEBUG - ===== GROUP (//entry/process [NXmpes::/NXentry/NXprocess]):
DEBUG - classpath: ['NXentry', 'NXprocess']
DEBUG - classes:
@@ -3175,7 +3522,9 @@ DEBUG -
DEBUG - documentation (NXentry.nxdl.xml:/PROCESS):
DEBUG -
DEBUG - documentation (NXprocess.nxdl.xml:):
-DEBUG - Document an event of data processing, reconstruction, or analysis for this data.
+DEBUG -
+ Document an event of data processing, reconstruction, or analysis for this data.
+
DEBUG - ===== ATTRS (//entry/process@NX_class)
DEBUG - value: NXprocess
DEBUG - classpath: ['NXentry', 'NXprocess']
@@ -3185,48 +3534,109 @@ NXentry.nxdl.xml:/PROCESS
NXprocess.nxdl.xml:
DEBUG - @NX_class [NX_CHAR]
DEBUG -
-DEBUG - ===== GROUP (//entry/process/distortion [NXmpes::/NXentry/NXprocess/distortion]):
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - ===== GROUP (//entry/process/distortion [NXmpes::/NXentry/NXprocess/NXdistortion]):
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXdistortion']
+DEBUG - classes:
+NXprocess.nxdl.xml:/DISTORTION
+NXdistortion.nxdl.xml:
+DEBUG - <>
+DEBUG - documentation (NXprocess.nxdl.xml:/DISTORTION):
+DEBUG -
+ Describes the operations of image distortion correction
+
+DEBUG - documentation (NXdistortion.nxdl.xml:):
+DEBUG -
+ Subclass of NXprocess to describe post-processing distortion correction.
+
+DEBUG - ===== ATTRS (//entry/process/distortion@NX_class)
+DEBUG - value: NXdistortion
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXdistortion']
+DEBUG - classes:
+NXprocess.nxdl.xml:/DISTORTION
+NXdistortion.nxdl.xml:
+DEBUG - @NX_class [NX_CHAR]
DEBUG -
DEBUG - ===== FIELD (//entry/process/distortion/applied):
DEBUG - value: True
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXdistortion', 'NX_BOOLEAN']
+DEBUG - classes:
+NXdistortion.nxdl.xml:/applied
+DEBUG - <>
+DEBUG - documentation (NXdistortion.nxdl.xml:/applied):
DEBUG -
+ Has the distortion correction been applied?
+
DEBUG - ===== FIELD (//entry/process/distortion/cdeform_field):
DEBUG - value: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ...
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXdistortion', 'NX_FLOAT']
+DEBUG - classes:
+NXdistortion.nxdl.xml:/cdeform_field
+DEBUG - <>
+DEBUG - documentation (NXdistortion.nxdl.xml:/cdeform_field):
DEBUG -
+ Column deformation field for general non-rigid distortion corrections. 2D matrix
+ holding the column information of the mapping of each original coordinate.
+
DEBUG - ===== FIELD (//entry/process/distortion/original_centre):
DEBUG - value: [203. 215.]
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXdistortion', 'NX_FLOAT']
+DEBUG - classes:
+NXdistortion.nxdl.xml:/original_centre
+DEBUG - <>
+DEBUG - documentation (NXdistortion.nxdl.xml:/original_centre):
DEBUG -
+ For symmetry-guided distortion correction. Here we record the coordinates of the
+ symmetry centre point.
+
DEBUG - ===== FIELD (//entry/process/distortion/original_points):
DEBUG - value: [166. 283.]
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXdistortion', 'NX_FLOAT']
+DEBUG - classes:
+NXdistortion.nxdl.xml:/original_points
+DEBUG - <>
+DEBUG - documentation (NXdistortion.nxdl.xml:/original_points):
DEBUG -
+ For symmetry-guided distortion correction. Here we record the coordinates of the
+ relevant symmetry points.
+
DEBUG - ===== FIELD (//entry/process/distortion/rdeform_field):
DEBUG - value: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ...
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXdistortion', 'NX_FLOAT']
+DEBUG - classes:
+NXdistortion.nxdl.xml:/rdeform_field
+DEBUG - <>
+DEBUG - documentation (NXdistortion.nxdl.xml:/rdeform_field):
DEBUG -
+ Row deformation field for general non-rigid distortion corrections. 2D matrix
+ holding the row information of the mapping of each original coordinate.
+
DEBUG - ===== FIELD (//entry/process/distortion/symmetry):
DEBUG - value: 6
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXdistortion', 'NX_INT']
+DEBUG - classes:
+NXdistortion.nxdl.xml:/symmetry
+DEBUG - <>
+DEBUG - documentation (NXdistortion.nxdl.xml:/symmetry):
DEBUG -
+ For `symmetry-guided distortion correction`_,
+ where a pattern of features is mapped to the regular geometric structure expected
+ from the symmetry. Here we record the number of elementary symmetry operations.
+
+ .. _symmetry-guided distortion correction: https://www.sciencedirect.com/science/article/abs/pii/S0304399118303474?via%3Dihub
+
DEBUG - ===== GROUP (//entry/process/energy_calibration [NXmpes::/NXentry/NXprocess/NXcalibration]):
DEBUG - classpath: ['NXentry', 'NXprocess', 'NXcalibration']
DEBUG - classes:
NXmpes.nxdl.xml:/ENTRY/PROCESS/energy_calibration
+NXprocess.nxdl.xml:/CALIBRATION
NXcalibration.nxdl.xml:
DEBUG - <>
DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY/PROCESS/energy_calibration):
DEBUG -
+DEBUG - documentation (NXprocess.nxdl.xml:/CALIBRATION):
+DEBUG -
+ Describes the operations of calibration procedures, e.g. axis calibrations.
+
DEBUG - documentation (NXcalibration.nxdl.xml:):
DEBUG -
Subclass of NXprocess to describe post-processing calibrations.
@@ -3236,6 +3646,7 @@ DEBUG - value: NXcalibration
DEBUG - classpath: ['NXentry', 'NXprocess', 'NXcalibration']
DEBUG - classes:
NXmpes.nxdl.xml:/ENTRY/PROCESS/energy_calibration
+NXprocess.nxdl.xml:/CALIBRATION
NXcalibration.nxdl.xml:
DEBUG - @NX_class [NX_CHAR]
DEBUG -
@@ -3282,7 +3693,13 @@ DEBUG -
Use a0, a1, ..., an for the coefficients, corresponding to the values in the coefficients field.
- Use x0, x1, ..., xn for the variables.
+ Use x0, x1, ..., xn for the nth position in the `original_axis` field.
+ If a symbol attribute is specified for the `original_axis`, it may be used instead of x.
+ If you want to use the whole axis use `x`.
+ Alternate axes can also be made available, as specified by the `input_SYMBOL` field.
+ The data should then be referred to here by the `SYMBOL` name, e.g., a field
+ named `input_my_field` should be referred to here as `my_field`, or `my_field0` if
+ you want to read the zeroth element of the array.
The formula should be numpy compliant.
@@ -3296,152 +3713,508 @@ DEBUG - documentation (NXcalibration.nxdl.xml:/original_axis):
DEBUG -
Vector containing the data coordinates in the original uncalibrated axis
-DEBUG - ===== GROUP (//entry/process/kx_calibration [NXmpes::/NXentry/NXprocess/kx_calibration]):
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - ===== GROUP (//entry/process/kx_calibration [NXmpes::/NXentry/NXprocess/NXcalibration]):
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXcalibration']
+DEBUG - classes:
+NXprocess.nxdl.xml:/CALIBRATION
+NXcalibration.nxdl.xml:
+DEBUG - <>
+DEBUG - documentation (NXprocess.nxdl.xml:/CALIBRATION):
+DEBUG -
+ Describes the operations of calibration procedures, e.g. axis calibrations.
+
+DEBUG - documentation (NXcalibration.nxdl.xml:):
+DEBUG -
+ Subclass of NXprocess to describe post-processing calibrations.
+
+DEBUG - ===== ATTRS (//entry/process/kx_calibration@NX_class)
+DEBUG - value: NXcalibration
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXcalibration']
+DEBUG - classes:
+NXprocess.nxdl.xml:/CALIBRATION
+NXcalibration.nxdl.xml:
+DEBUG - @NX_class [NX_CHAR]
DEBUG -
DEBUG - ===== FIELD (//entry/process/kx_calibration/applied):
DEBUG - value: True
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXcalibration', 'NX_BOOLEAN']
+DEBUG - classes:
+NXcalibration.nxdl.xml:/applied
+DEBUG - <>
+DEBUG - documentation (NXcalibration.nxdl.xml:/applied):
DEBUG -
+ Has the calibration been applied?
+
DEBUG - ===== FIELD (//entry/process/kx_calibration/calibrated_axis):
DEBUG - value: [-2.68021375 -2.66974416 -2.65927458 -2.64880499 -2.63833541 -2.62786582 ...
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXcalibration', 'NX_FLOAT']
+DEBUG - classes:
+NXcalibration.nxdl.xml:/calibrated_axis
+DEBUG - <>
+DEBUG - documentation (NXcalibration.nxdl.xml:/calibrated_axis):
DEBUG -
+ A vector representing the axis after calibration, matching the data length
+
DEBUG - ===== FIELD (//entry/process/kx_calibration/offset):
DEBUG - value: 256.0
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXcalibration', 'NX_FLOAT']
+DEBUG - classes:
+NXcalibration.nxdl.xml:/offset
+DEBUG - <>
+DEBUG - documentation (NXcalibration.nxdl.xml:/offset):
DEBUG -
+ For linear calibration. Offset parameter.
+ This should yield the relation `calibrated_axis` = `scaling` * `original_axis` + `offset`.
+
DEBUG - ===== FIELD (//entry/process/kx_calibration/scaling):
DEBUG - value: 0.01046958495673419
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXcalibration', 'NX_FLOAT']
+DEBUG - classes:
+NXcalibration.nxdl.xml:/scaling
+DEBUG - <>
+DEBUG - documentation (NXcalibration.nxdl.xml:/scaling):
DEBUG -
-DEBUG - ===== GROUP (//entry/process/ky_calibration [NXmpes::/NXentry/NXprocess/ky_calibration]):
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+ For linear calibration. Scaling parameter.
+ This should yield the relation `calibrated_axis` = `scaling` * `original_axis` + `offset`.
+
+DEBUG - ===== GROUP (//entry/process/ky_calibration [NXmpes::/NXentry/NXprocess/NXcalibration]):
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXcalibration']
+DEBUG - classes:
+NXprocess.nxdl.xml:/CALIBRATION
+NXcalibration.nxdl.xml:
+DEBUG - <>
+DEBUG - documentation (NXprocess.nxdl.xml:/CALIBRATION):
+DEBUG -
+ Describes the operations of calibration procedures, e.g. axis calibrations.
+
+DEBUG - documentation (NXcalibration.nxdl.xml:):
+DEBUG -
+ Subclass of NXprocess to describe post-processing calibrations.
+
+DEBUG - ===== ATTRS (//entry/process/ky_calibration@NX_class)
+DEBUG - value: NXcalibration
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXcalibration']
+DEBUG - classes:
+NXprocess.nxdl.xml:/CALIBRATION
+NXcalibration.nxdl.xml:
+DEBUG - @NX_class [NX_CHAR]
DEBUG -
DEBUG - ===== FIELD (//entry/process/ky_calibration/applied):
DEBUG - value: True
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXcalibration', 'NX_BOOLEAN']
+DEBUG - classes:
+NXcalibration.nxdl.xml:/applied
+DEBUG - <>
+DEBUG - documentation (NXcalibration.nxdl.xml:/applied):
DEBUG -
+ Has the calibration been applied?
+
DEBUG - ===== FIELD (//entry/process/ky_calibration/calibrated_axis):
DEBUG - value: [-2.68021375 -2.66974416 -2.65927458 -2.64880499 -2.63833541 -2.62786582 ...
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXcalibration', 'NX_FLOAT']
+DEBUG - classes:
+NXcalibration.nxdl.xml:/calibrated_axis
+DEBUG - <>
+DEBUG - documentation (NXcalibration.nxdl.xml:/calibrated_axis):
DEBUG -
+ A vector representing the axis after calibration, matching the data length
+
DEBUG - ===== FIELD (//entry/process/ky_calibration/offset):
DEBUG - value: 256.0
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXcalibration', 'NX_FLOAT']
+DEBUG - classes:
+NXcalibration.nxdl.xml:/offset
+DEBUG - <>
+DEBUG - documentation (NXcalibration.nxdl.xml:/offset):
DEBUG -
+ For linear calibration. Offset parameter.
+ This should yield the relation `calibrated_axis` = `scaling` * `original_axis` + `offset`.
+
DEBUG - ===== FIELD (//entry/process/ky_calibration/scaling):
DEBUG - value: 0.01046958495673419
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXcalibration', 'NX_FLOAT']
+DEBUG - classes:
+NXcalibration.nxdl.xml:/scaling
+DEBUG - <>
+DEBUG - documentation (NXcalibration.nxdl.xml:/scaling):
DEBUG -
-DEBUG - ===== GROUP (//entry/process/registration [NXmpes::/NXentry/NXprocess/registration]):
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+ For linear calibration. Scaling parameter.
+ This should yield the relation `calibrated_axis` = `scaling` * `original_axis` + `offset`.
+
+DEBUG - ===== GROUP (//entry/process/registration [NXmpes::/NXentry/NXprocess/NXregistration]):
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration']
+DEBUG - classes:
+NXprocess.nxdl.xml:/REGISTRATION
+NXregistration.nxdl.xml:
+DEBUG - <>
+DEBUG - documentation (NXprocess.nxdl.xml:/REGISTRATION):
+DEBUG -
+ Describes the operations of image registration
+
+DEBUG - documentation (NXregistration.nxdl.xml:):
+DEBUG -
+ Describes image registration procedures.
+
+DEBUG - ===== ATTRS (//entry/process/registration@NX_class)
+DEBUG - value: NXregistration
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration']
+DEBUG - classes:
+NXprocess.nxdl.xml:/REGISTRATION
+NXregistration.nxdl.xml:
+DEBUG - @NX_class [NX_CHAR]
DEBUG -
DEBUG - ===== FIELD (//entry/process/registration/applied):
DEBUG - value: True
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NX_BOOLEAN']
+DEBUG - classes:
+NXregistration.nxdl.xml:/applied
+DEBUG - <>
+DEBUG - documentation (NXregistration.nxdl.xml:/applied):
DEBUG -
+ Has the registration been applied?
+
DEBUG - ===== FIELD (//entry/process/registration/depends_on):
DEBUG - value: b'/entry/process/registration/tranformations/rot_z'
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
-DEBUG -
-DEBUG - ===== GROUP (//entry/process/registration/tranformations [NXmpes::/NXentry/NXprocess/registration/tranformations]):
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
-DEBUG -
-DEBUG - ===== FIELD (//entry/process/registration/tranformations/rot_z):
-DEBUG - value: -1.0
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
-DEBUG -
-DEBUG - ===== ATTRS (//entry/process/registration/tranformations/rot_z@depends_on)
-DEBUG - value: trans_y
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
-DEBUG -
-DEBUG - ===== ATTRS (//entry/process/registration/tranformations/rot_z@offset)
-DEBUG - value: [256. 256. 0.]
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
-DEBUG -
-DEBUG - ===== ATTRS (//entry/process/registration/tranformations/rot_z@transformation_type)
-DEBUG - value: rotation
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
-DEBUG -
-DEBUG - ===== ATTRS (//entry/process/registration/tranformations/rot_z@units)
-DEBUG - value: degrees
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
-DEBUG -
-DEBUG - ===== ATTRS (//entry/process/registration/tranformations/rot_z@vector)
-DEBUG - value: [0. 0. 1.]
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NX_CHAR']
+DEBUG - classes:
+NXregistration.nxdl.xml:/depends_on
+DEBUG - <>
+DEBUG - documentation (NXregistration.nxdl.xml:/depends_on):
DEBUG -
-DEBUG - ===== FIELD (//entry/process/registration/tranformations/trans_x):
-DEBUG - value: 43.0
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+ Specifies the position by pointing to the last transformation in the
+ transformation chain in the NXtransformations group.
+
+DEBUG - ===== GROUP (//entry/process/registration/tranformations [NXmpes::/NXentry/NXprocess/NXregistration/NXtransformations]):
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NXtransformations']
+DEBUG - classes:
+NXregistration.nxdl.xml:/TRANSFORMATIONS
+NXtransformations.nxdl.xml:
+DEBUG - <>
+DEBUG - documentation (NXregistration.nxdl.xml:/TRANSFORMATIONS):
DEBUG -
-DEBUG - ===== ATTRS (//entry/process/registration/tranformations/trans_x@depends_on)
-DEBUG - value: .
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+ To describe the operations of image registration (combinations of rigid
+ translations and rotations)
+
+DEBUG - documentation (NXtransformations.nxdl.xml:):
DEBUG -
-DEBUG - ===== ATTRS (//entry/process/registration/tranformations/trans_x@transformation_type)
-DEBUG - value: translation
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+ Collection of axis-based translations and rotations to describe a geometry.
+ May also contain axes that do not move and therefore do not have a transformation
+ type specified, but are useful in understanding coordinate frames within which
+ transformations are done, or in documenting important directions, such as the
+ direction of gravity.
+
+ A nested sequence of transformations lists the translation and rotation steps
+ needed to describe the position and orientation of any movable or fixed device.
+
+ There will be one or more transformations (axes) defined by one or more fields
+ for each transformation. Transformations can also be described by NXlog groups when
+ the values change with time. The all-caps name ``AXISNAME`` designates the
+ particular axis generating a transformation (e.g. a rotation axis or a translation
+ axis or a general axis). The attribute ``units="NX_TRANSFORMATION"`` designates the
+ units will be appropriate to the ``transformation_type`` attribute:
+
+ * ``NX_LENGTH`` for ``translation``
+ * ``NX_ANGLE`` for ``rotation``
+ * ``NX_UNITLESS`` for axes for which no transformation type is specified
+
+ This class will usually contain all axes of a sample stage or goniometer or
+ a detector. The NeXus default McSTAS coordinate frame is assumed, but additional
+ useful coordinate axes may be defined by using axes for which no transformation
+ type has been specified.
+
+ The entry point (``depends_on``) will be outside of this class and point to a
+ field in here. Following the chain may also require following ``depends_on``
+ links to transformations outside, for example to a common base table. If
+ a relative path is given, it is relative to the group enclosing the ``depends_on``
+ specification.
+
+ For a chain of three transformations, where :math:`T_1` depends on :math:`T_2`
+ and that in turn depends on :math:`T_3`, the final transformation :math:`T_f` is
+
+ .. math:: T_f = T_3 T_2 T_1
+
+ In explicit terms, the transformations are a subset of affine transformations
+ expressed as 4x4 matrices that act on homogeneous coordinates, :math:`w=(x,y,z,1)^T`.
+
+ For rotation and translation,
+
+ .. math:: T_r &= \begin{pmatrix} R & o \\ 0_3 & 1 \end{pmatrix} \\ T_t &= \begin{pmatrix} I_3 & t + o \\ 0_3 & 1 \end{pmatrix}
+
+ where :math:`R` is the usual 3x3 rotation matrix, :math:`o` is an offset vector,
+ :math:`0_3` is a row of 3 zeros, :math:`I_3` is the 3x3 identity matrix and
+ :math:`t` is the translation vector.
+
+ :math:`o` is given by the ``offset`` attribute, :math:`t` is given by the ``vector``
+ attribute multiplied by the field value, and :math:`R` is defined as a rotation
+ about an axis in the direction of ``vector``, of angle of the field value.
+
+ NOTE
+
+ One possible use of ``NXtransformations`` is to define the motors and
+ transformations for a diffractometer (goniometer). Such use is mentioned
+ in the ``NXinstrument`` base class. Use one ``NXtransformations`` group
+ for each diffractometer and name the group appropriate to the device.
+ Collecting the motors of a sample table or xyz-stage in an NXtransformations
+ group is equally possible.
+
+
+ Following the section on the general description of axes in NXtransformations is a section which
+ documents the fields commonly used within NeXus for positioning purposes and their meaning. Whenever
+ there is a need for positioning a beam line component please use the existing names. Use as many fields
+ as needed in order to position the component. Feel free to add more axes if required. In the description
+ given below, only those attributes which are defined through the name are specified. Add the other attributes
+ of the full set:
+
+ * vector
+ * offset
+ * transformation_type
+ * depends_on
+
+ as needed.
+
+DEBUG - ===== ATTRS (//entry/process/registration/tranformations@NX_class)
+DEBUG - value: NXtransformations
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NXtransformations']
+DEBUG - classes:
+NXregistration.nxdl.xml:/TRANSFORMATIONS
+NXtransformations.nxdl.xml:
+DEBUG - @NX_class [NX_CHAR]
+DEBUG -
+DEBUG - ===== FIELD (//entry/process/registration/tranformations/rot_z):
+DEBUG - value: -1.0
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NXtransformations', 'NX_NUMBER']
+DEBUG - classes:
+NXtransformations.nxdl.xml:/AXISNAME
+DEBUG - <>
+DEBUG - documentation (NXtransformations.nxdl.xml:/AXISNAME):
+DEBUG -
+ Units need to be appropriate for translation or rotation
+
+ The name of this field is not forced. The user is free to use any name
+ that does not cause confusion. When using more than one ``AXISNAME`` field,
+ make sure that each field name is unique in the same group, as required
+ by HDF5.
+
+ The values given should be the start points of exposures for the corresponding
+ frames. The end points should be given in ``AXISNAME_end``.
+
+DEBUG - ===== ATTRS (//entry/process/registration/tranformations/rot_z@depends_on)
+DEBUG - value: trans_y
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NXtransformations', 'NX_NUMBER']
+DEBUG - classes:
+NXtransformations.nxdl.xml:/AXISNAME
+DEBUG - NXtransformations.nxdl.xml:/AXISNAME@depends_on - [NX_CHAR]
+DEBUG - <>
+DEBUG - documentation (NXtransformations.nxdl.xml:/AXISNAME/depends_on):
+DEBUG -
+ Points to the path to a field defining the axis on which this
+ depends or the string ".".
+
+DEBUG - ===== ATTRS (//entry/process/registration/tranformations/rot_z@offset)
+DEBUG - value: [256. 256. 0.]
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NXtransformations', 'NX_NUMBER']
+DEBUG - classes:
+NXtransformations.nxdl.xml:/AXISNAME
+DEBUG - NXtransformations.nxdl.xml:/AXISNAME@offset - [NX_NUMBER]
+DEBUG - <>
+DEBUG - documentation (NXtransformations.nxdl.xml:/AXISNAME/offset):
+DEBUG -
+ A fixed offset applied before the transformation (three vector components).
+ This is not intended to be a substitute for a fixed ``translation`` axis but, for example,
+ as the mechanical offset from mounting the axis to its dependency.
+
+DEBUG - ===== ATTRS (//entry/process/registration/tranformations/rot_z@transformation_type)
+DEBUG - value: rotation
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NXtransformations', 'NX_NUMBER']
+DEBUG - classes:
+NXtransformations.nxdl.xml:/AXISNAME
+DEBUG - NXtransformations.nxdl.xml:/AXISNAME@transformation_type - [NX_CHAR]
+DEBUG - <>
+DEBUG - enumeration (NXtransformations.nxdl.xml:/AXISNAME/transformation_type):
+DEBUG - -> translation
+DEBUG - -> rotation
+DEBUG - documentation (NXtransformations.nxdl.xml:/AXISNAME/transformation_type):
+DEBUG -
+ The transformation_type may be ``translation``, in which case the
+ values are linear displacements along the axis, ``rotation``,
+ in which case the values are angular rotations around the axis.
+
+ If this attribute is omitted, this is an axis for which there
+ is no motion to be specified, such as the direction of gravity,
+ or the direction to the source, or a basis vector of a
+ coordinate frame.
+
+DEBUG - ===== ATTRS (//entry/process/registration/tranformations/rot_z@units)
+DEBUG - value: degrees
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NXtransformations', 'NX_NUMBER']
+DEBUG - classes:
+NXtransformations.nxdl.xml:/AXISNAME
+DEBUG - NXtransformations.nxdl.xml:/AXISNAME@units [NX_TRANSFORMATION]
+DEBUG - ===== ATTRS (//entry/process/registration/tranformations/rot_z@vector)
+DEBUG - value: [0. 0. 1.]
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NXtransformations', 'NX_NUMBER']
+DEBUG - classes:
+NXtransformations.nxdl.xml:/AXISNAME
+DEBUG - NXtransformations.nxdl.xml:/AXISNAME@vector - [NX_NUMBER]
+DEBUG - <>
+DEBUG - documentation (NXtransformations.nxdl.xml:/AXISNAME/vector):
+DEBUG -
+ Three values that define the axis for this transformation.
+ The axis should be normalized to unit length, making it
+ dimensionless. For ``rotation`` axes, the direction should be
+ chosen for a right-handed rotation with increasing angle.
+ For ``translation`` axes the direction should be chosen for
+ increasing displacement. For general axes, an appropriate direction
+ should be chosen.
+
+DEBUG - ===== FIELD (//entry/process/registration/tranformations/trans_x):
+DEBUG - value: 43.0
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NXtransformations', 'NX_NUMBER']
+DEBUG - classes:
+NXtransformations.nxdl.xml:/AXISNAME
+DEBUG - <>
+DEBUG - documentation (NXtransformations.nxdl.xml:/AXISNAME):
+DEBUG -
+ Units need to be appropriate for translation or rotation
+
+ The name of this field is not forced. The user is free to use any name
+ that does not cause confusion. When using more than one ``AXISNAME`` field,
+ make sure that each field name is unique in the same group, as required
+ by HDF5.
+
+ The values given should be the start points of exposures for the corresponding
+ frames. The end points should be given in ``AXISNAME_end``.
+
+DEBUG - ===== ATTRS (//entry/process/registration/tranformations/trans_x@depends_on)
+DEBUG - value: .
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NXtransformations', 'NX_NUMBER']
+DEBUG - classes:
+NXtransformations.nxdl.xml:/AXISNAME
+DEBUG - NXtransformations.nxdl.xml:/AXISNAME@depends_on - [NX_CHAR]
+DEBUG - <>
+DEBUG - documentation (NXtransformations.nxdl.xml:/AXISNAME/depends_on):
+DEBUG -
+ Points to the path to a field defining the axis on which this
+ depends or the string ".".
+
+DEBUG - ===== ATTRS (//entry/process/registration/tranformations/trans_x@transformation_type)
+DEBUG - value: translation
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NXtransformations', 'NX_NUMBER']
+DEBUG - classes:
+NXtransformations.nxdl.xml:/AXISNAME
+DEBUG - NXtransformations.nxdl.xml:/AXISNAME@transformation_type - [NX_CHAR]
+DEBUG - <>
+DEBUG - enumeration (NXtransformations.nxdl.xml:/AXISNAME/transformation_type):
+DEBUG - -> translation
+DEBUG - -> rotation
+DEBUG - documentation (NXtransformations.nxdl.xml:/AXISNAME/transformation_type):
DEBUG -
+ The transformation_type may be ``translation``, in which case the
+ values are linear displacements along the axis, ``rotation``,
+ in which case the values are angular rotations around the axis.
+
+ If this attribute is omitted, this is an axis for which there
+ is no motion to be specified, such as the direction of gravity,
+ or the direction to the source, or a basis vector of a
+ coordinate frame.
+
DEBUG - ===== ATTRS (//entry/process/registration/tranformations/trans_x@units)
DEBUG - value: pixels
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NXtransformations', 'NX_NUMBER']
+DEBUG - classes:
+NXtransformations.nxdl.xml:/AXISNAME
+DEBUG - NXtransformations.nxdl.xml:/AXISNAME@units [NX_TRANSFORMATION]
DEBUG - ===== ATTRS (//entry/process/registration/tranformations/trans_x@vector)
DEBUG - value: [1. 0. 0.]
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NXtransformations', 'NX_NUMBER']
+DEBUG - classes:
+NXtransformations.nxdl.xml:/AXISNAME
+DEBUG - NXtransformations.nxdl.xml:/AXISNAME@vector - [NX_NUMBER]
+DEBUG - <>
+DEBUG - documentation (NXtransformations.nxdl.xml:/AXISNAME/vector):
DEBUG -
+ Three values that define the axis for this transformation.
+ The axis should be normalized to unit length, making it
+ dimensionless. For ``rotation`` axes, the direction should be
+ chosen for a right-handed rotation with increasing angle.
+ For ``translation`` axes the direction should be chosen for
+ increasing displacement. For general axes, an appropriate direction
+ should be chosen.
+
DEBUG - ===== FIELD (//entry/process/registration/tranformations/trans_y):
DEBUG - value: 55.0
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NXtransformations', 'NX_NUMBER']
+DEBUG - classes:
+NXtransformations.nxdl.xml:/AXISNAME
+DEBUG - <>
+DEBUG - documentation (NXtransformations.nxdl.xml:/AXISNAME):
DEBUG -
+ Units need to be appropriate for translation or rotation
+
+ The name of this field is not forced. The user is free to use any name
+ that does not cause confusion. When using more than one ``AXISNAME`` field,
+ make sure that each field name is unique in the same group, as required
+ by HDF5.
+
+ The values given should be the start points of exposures for the corresponding
+ frames. The end points should be given in ``AXISNAME_end``.
+
DEBUG - ===== ATTRS (//entry/process/registration/tranformations/trans_y@depends_on)
DEBUG - value: trans_x
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NXtransformations', 'NX_NUMBER']
+DEBUG - classes:
+NXtransformations.nxdl.xml:/AXISNAME
+DEBUG - NXtransformations.nxdl.xml:/AXISNAME@depends_on - [NX_CHAR]
+DEBUG - <>
+DEBUG - documentation (NXtransformations.nxdl.xml:/AXISNAME/depends_on):
DEBUG -
+ Points to the path to a field defining the axis on which this
+ depends or the string ".".
+
DEBUG - ===== ATTRS (//entry/process/registration/tranformations/trans_y@transformation_type)
DEBUG - value: translation
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NXtransformations', 'NX_NUMBER']
+DEBUG - classes:
+NXtransformations.nxdl.xml:/AXISNAME
+DEBUG - NXtransformations.nxdl.xml:/AXISNAME@transformation_type - [NX_CHAR]
+DEBUG - <>
+DEBUG - enumeration (NXtransformations.nxdl.xml:/AXISNAME/transformation_type):
+DEBUG - -> translation
+DEBUG - -> rotation
+DEBUG - documentation (NXtransformations.nxdl.xml:/AXISNAME/transformation_type):
DEBUG -
+ The transformation_type may be ``translation``, in which case the
+ values are linear displacements along the axis, ``rotation``,
+ in which case the values are angular rotations around the axis.
+
+ If this attribute is omitted, this is an axis for which there
+ is no motion to be specified, such as the direction of gravity,
+ or the direction to the source, or a basis vector of a
+ coordinate frame.
+
DEBUG - ===== ATTRS (//entry/process/registration/tranformations/trans_y@units)
DEBUG - value: pixels
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NXtransformations', 'NX_NUMBER']
+DEBUG - classes:
+NXtransformations.nxdl.xml:/AXISNAME
+DEBUG - NXtransformations.nxdl.xml:/AXISNAME@units [NX_TRANSFORMATION]
DEBUG - ===== ATTRS (//entry/process/registration/tranformations/trans_y@vector)
DEBUG - value: [0. 1. 0.]
-DEBUG - classpath: ['NXentry', 'NXprocess']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXprocess', 'NXregistration', 'NXtransformations', 'NX_NUMBER']
+DEBUG - classes:
+NXtransformations.nxdl.xml:/AXISNAME
+DEBUG - NXtransformations.nxdl.xml:/AXISNAME@vector - [NX_NUMBER]
+DEBUG - <>
+DEBUG - documentation (NXtransformations.nxdl.xml:/AXISNAME/vector):
DEBUG -
+ Three values that define the axis for this transformation.
+ The axis should be normalized to unit length, making it
+ dimensionless. For ``rotation`` axes, the direction should be
+ chosen for a right-handed rotation with increasing angle.
+ For ``translation`` axes the direction should be chosen for
+ increasing displacement. For general axes, an appropriate direction
+ should be chosen.
+
DEBUG - ===== GROUP (//entry/sample [NXmpes::/NXentry/NXsample]):
DEBUG - classpath: ['NXentry', 'NXsample']
DEBUG - classes:
@@ -3455,12 +4228,12 @@ DEBUG - documentation (NXentry.nxdl.xml:/SAMPLE):
DEBUG -
DEBUG - documentation (NXsample.nxdl.xml:):
DEBUG -
- Any information on the sample.
-
- This could include scanned variables that
- are associated with one of the data dimensions, e.g. the magnetic field, or
- logged data, e.g. monitored temperature vs elapsed time.
-
+ Any information on the sample.
+
+ This could include scanned variables that
+ are associated with one of the data dimensions, e.g. the magnetic field, or
+ logged data, e.g. monitored temperature vs elapsed time.
+
DEBUG - ===== ATTRS (//entry/sample@NX_class)
DEBUG - value: NXsample
DEBUG - classpath: ['NXentry', 'NXsample']
@@ -3472,14 +4245,20 @@ DEBUG - @NX_class [NX_CHAR]
DEBUG -
DEBUG - ===== FIELD (//entry/sample/bias):
DEBUG - value: 17.799719004221362
-DEBUG - classpath: ['NXentry', 'NXsample']
-DEBUG - NOT IN SCHEMA
+DEBUG - classpath: ['NXentry', 'NXsample', 'NX_FLOAT']
+DEBUG - classes:
+NXmpes.nxdl.xml:/ENTRY/SAMPLE/bias
+DEBUG - <>
+DEBUG - documentation (NXmpes.nxdl.xml:/ENTRY/SAMPLE/bias):
DEBUG -
+ Voltage applied to sample and sample holder.
+
DEBUG - ===== ATTRS (//entry/sample/bias@units)
DEBUG - value: V
-DEBUG - classpath: ['NXentry', 'NXsample']
-DEBUG - NOT IN SCHEMA
-DEBUG -
+DEBUG - classpath: ['NXentry', 'NXsample', 'NX_FLOAT']
+DEBUG - classes:
+NXmpes.nxdl.xml:/ENTRY/SAMPLE/bias
+DEBUG - NXmpes.nxdl.xml:/ENTRY/SAMPLE/bias@units [NX_VOLTAGE]
DEBUG - ===== FIELD (//entry/sample/chemical_formula):
DEBUG - value: b'MoTe2'
DEBUG - classpath: ['NXentry', 'NXsample', 'NX_CHAR']
@@ -3494,25 +4273,25 @@ DEBUG -
DEBUG - documentation (NXsample.nxdl.xml:/chemical_formula):
DEBUG -
- The chemical formula specified using CIF conventions.
- Abbreviated version of CIF standard:
-
- * Only recognized element symbols may be used.
- * Each element symbol is followed by a 'count' number. A count of '1' may be omitted.
- * A space or parenthesis must separate each cluster of (element symbol + count).
- * Where a group of elements is enclosed in parentheses, the multiplier for the
- group must follow the closing parentheses. That is, all element and group
- multipliers are assumed to be printed as subscripted numbers.
- * Unless the elements are ordered in a manner that corresponds to their chemical
- structure, the order of the elements within any group or moiety depends on
- whether or not carbon is present.
- * If carbon is present, the order should be:
-
- - C, then H, then the other elements in alphabetical order of their symbol.
- - If carbon is not present, the elements are listed purely in alphabetic order of their symbol.
-
- * This is the *Hill* system used by Chemical Abstracts.
-
+ The chemical formula specified using CIF conventions.
+ Abbreviated version of CIF standard:
+
+ * Only recognized element symbols may be used.
+ * Each element symbol is followed by a 'count' number. A count of '1' may be omitted.
+ * A space or parenthesis must separate each cluster of (element symbol + count).
+ * Where a group of elements is enclosed in parentheses, the multiplier for the
+ group must follow the closing parentheses. That is, all element and group
+ multipliers are assumed to be printed as subscripted numbers.
+ * Unless the elements are ordered in a manner that corresponds to their chemical
+ structure, the order of the elements within any group or moiety depends on
+ whether or not carbon is present.
+ * If carbon is present, the order should be:
+
+ - C, then H, then the other elements in alphabetical order of their symbol.
+ - If carbon is not present, the elements are listed purely in alphabetic order of their symbol.
+
+ * This is the *Hill* system used by Chemical Abstracts.
+
DEBUG - ===== FIELD (//entry/sample/depends_on):
DEBUG - value: b'/entry/sample/transformations/corrected_phi'
DEBUG - classpath: ['NXentry', 'NXsample', 'NX_CHAR']
@@ -3521,12 +4300,12 @@ NXsample.nxdl.xml:/depends_on
DEBUG - <>
DEBUG - documentation (NXsample.nxdl.xml:/depends_on):
DEBUG -
- NeXus positions components by applying a set of translations and rotations
- to apply to the component starting from 0, 0, 0. The order of these operations
- is critical and forms what NeXus calls a dependency chain. The depends_on
- field defines the path to the top most operation of the dependency chain or the
- string "." if located in the origin. Usually these operations are stored in a
- NXtransformations group. But NeXus allows them to be stored anywhere.
+ NeXus positions components by applying a set of translations and rotations
+ to apply to the component starting from 0, 0, 0. The order of these operations
+ is critical and forms what NeXus calls a dependency chain. The depends_on
+ field defines the path to the top most operation of the dependency chain or the
+ string "." if located in the origin. Usually these operations are stored in a
+ NXtransformations group. But NeXus allows them to be stored anywhere.
DEBUG - ===== FIELD (//entry/sample/description):