Commit feabd6c

Remove object_type key

franzpoeschel committed Nov 30, 2018
1 parent b18a1e3 commit feabd6c
Showing 4 changed files with 36 additions and 43 deletions.
20 changes: 7 additions & 13 deletions docs/source/backends/json.rst
@@ -16,34 +16,28 @@ A JSON file uses the file ending ``.json``. The JSON backend is chosen by creating
 a ``Series`` object with a filename that has this file ending.

 The top-level JSON object is a group representing the openPMD root group ``"/"``.
-Any **openPMD group** is represented in JSON as a JSON object with two keys:
+Any **openPMD group** is represented in JSON as a JSON object with two reserved keys:

-* ``object_type`` For groups, this key points to the string ``"group"``. This makes
-  it possible to distinguish groups from datasets.
 * ``attributes``: Attributes associated with the group. This key may be null or not be present
   at all, thus indicating a group without attributes.
+* ``platform_byte_widths`` (root group only): Byte widths specific to the writing platform.
+  Will be overwritten every time that a JSON value is stored to disk, hence this information
+  is only available about the last platform writing the JSON value.

 All datasets and subgroups contained in this group are represented as a further key of
-the group object. ``object_type``, ``attributes`` and ``platform_byte_widths`` have
+the group object. ``attributes`` and ``platform_byte_widths`` have
 hence the character of reserved keywords and cannot be used for group and dataset names
 when working with the JSON backend.
 Datasets and groups have the same namespace, meaning that there may not be a subgroup
 and a dataset with the same name contained in one group.

-Additionally to the two mentioned keys, the top-level group stores information about
-the byte widths specific to the writing platform behind the key ``platform_byte_widths``.
-Will be overwritten every time that a JSON value is stored to disk, hence this information
-is only available about the last platform writing the JSON value.
+Any **openPMD dataset** is a JSON object with three keys:

-Any **openPMD dataset** is a JSON object with five keys:
-
-* ``object_type`` For datasets, this key points to the string ``"dataset"``. This makes
-  it possible to distinguish datasets from groups.
 * ``attributes``: Attributes associated with the dataset. May be ``null`` or not present if no attributes are associated with the dataset.
 * ``datatype``: A string describing the type of the stored data.
-* ``extent``: A JSON array describing the extent of the dataset in every dimension.
 * ``data`` A nested array storing the actual data in row-major manner.
   The data needs to be consistent with the fields ``datatype`` and ``extent``.
+  Checking whether this key points to an array can be (and is internally) used to distinguish groups from datasets.

 **Attributes** are stored as a JSON object with a key for each attribute.
 Every such attribute is itself a JSON object with two keys:
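To make the revised layout described above concrete, here is a minimal sketch (an editor's illustration, not part of the commit) that builds a group containing one dataset in the post-change format with nlohmann::json, the library this backend uses. No ``object_type`` marker appears; a dataset is recognizable by its ``data`` array. The key name ``my_dataset`` is invented for the example.

```cpp
// Editor's sketch of the new on-disk layout described above (illustrative only).
#include <nlohmann/json.hpp>
#include <iostream>

int main()
{
    nlohmann::json dset = nlohmann::json::object();
    dset["attributes"] = nullptr;     // attributes may be null or absent
    dset["datatype"]   = "DOUBLE";
    // Row-major nested array; its nesting implies the extent (here 2 x 3).
    dset["data"] = { { 1.0, 2.0, 3.0 }, { 4.0, 5.0, 6.0 } };

    // A group is now simply a JSON object without a "data" array.
    nlohmann::json group = nlohmann::json::object();
    group["my_dataset"] = dset;       // hypothetical dataset name

    std::cout << group.dump(2) << std::endl;
}
```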
14 changes: 3 additions & 11 deletions docs/source/backends/json_example.json
@@ -42,7 +42,6 @@
       }
     },
     "meshes": {
-      "object_type": "group",
       "rho": {
         "attributes": {
           "axisLabels": {
@@ -119,17 +118,10 @@
               8
             ]
           ],
-          "datatype": "DOUBLE",
-          "extent": [
-            3,
-            3
-          ],
-          "object_type": "dataset"
+          "datatype": "DOUBLE"
         }
-      },
-      "object_type": "group"
-    },
-    "object_type": "group"
+      }
+    }
   },
   "platform_byte_widths": {
     "BOOL": 1,
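Since the example no longer stores ``extent`` (nor ``object_type``), the shape of ``rho`` is implied by the nesting of its ``data`` array. A hedged sketch, added by the editor and assuming rectangular row-major nesting, of how an extent such as ``[3, 3]`` could be recovered; ``extentOf`` is an invented helper, not part of the library.

```cpp
// Editor's sketch: derive a dataset's extent from its nested "data" array.
// Assumes rectangular, row-major nesting as described in the documentation above.
#include <nlohmann/json.hpp>
#include <cstddef>
#include <iostream>
#include <vector>

std::vector<std::size_t> extentOf(nlohmann::json const & data)
{
    std::vector<std::size_t> extent;
    nlohmann::json const * node = &data;
    while (node->is_array())
    {
        extent.push_back(node->size());
        if (node->empty())
            break;
        node = &node->at(0);   // descend along the first slot of each dimension
    }
    return extent;
}

int main()
{
    auto rho = nlohmann::json::parse(
        R"({ "datatype": "DOUBLE", "data": [[0, 1, 2], [3, 4, 5], [6, 7, 8]] })");
    for (auto n : extentOf(rho["data"]))
        std::cout << n << ' ';         // prints: 3 3
    std::cout << std::endl;
}
```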
5 changes: 3 additions & 2 deletions include/openPMD/IO/JSON/JSONIOHandlerImpl.hpp
@@ -404,9 +404,10 @@ namespace openPMD
                 File
             );

-            static bool isGroup( nlohmann::json const & j );
+            // need to check the name too in order to exclude "attributes" key
+            static bool isGroup( nlohmann::json::const_iterator it );

-            static bool isDataset( nlohmann::json const & j );
+            static bool isDataset( nlohmann::json const & j );


             // check whether the json reference contains a valid dataset
40 changes: 23 additions & 17 deletions src/IO/JSON/JSONIOHandlerImpl.cpp
@@ -214,7 +214,6 @@ namespace openPMD
             );
             auto & dset = jsonVal[name];
             dset["datatype"] = datatypeToString( parameter.dtype );
-            dset["object_type"] = "dataset";
             dset["data"] = initializeNDArray( parameter.extent );
             writable->written = true;
             m_dirty.emplace( file );
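With the ``object_type`` write gone, the node created here holds only ``datatype`` and a null-filled ``data`` array of the requested extent. Roughly, for a 2 x 3 dataset (editor's sketch; the nested nulls are spelled out by hand instead of calling ``initializeNDArray``):

```cpp
// Editor's sketch of the JSON node the create-dataset path now produces.
#include <nlohmann/json.hpp>
#include <iostream>

int main()
{
    nlohmann::json dset;
    dset["datatype"] = "DOUBLE";
    // initializeNDArray( {2, 3} ) yields this nested null structure:
    dset["data"] = { { nullptr, nullptr, nullptr },
                     { nullptr, nullptr, nullptr } };
    std::cout << dset.dump(2) << std::endl;
}
```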
@@ -719,7 +718,7 @@
                 ->clear( );
             for( auto it = j.begin( ); it != j.end( ); it++ )
             {
-                if( isGroup( it.value( ) ) )
+                if( isGroup( it ) )
                 {
                     parameters.paths
                         ->push_back( it.key( ) );
@@ -742,7 +741,7 @@
                 ->clear( );
             for( auto it = j.begin( ); it != j.end( ); it++ )
             {
-                if( isDataset( it.value( ) ) )
+                if( isDataset( it.value() ) )
                 {
                     parameters.datasets
                         ->push_back( it.key( ) );
@@ -918,17 +917,20 @@
     {
         // idea: begin from the innermost shale and copy the result into the
         // outer shales
-        nlohmann::json accum; // null
+        nlohmann::json accum;
+        nlohmann::json old;
+        auto * accum_ptr = & accum;
+        auto * old_ptr = & old;
         for( auto it = extent.rbegin( ); it != extent.rend( ); it++ )
         {
-            nlohmann::json old = accum;
-            accum = nlohmann::json {};
+            std::swap(old_ptr, accum_ptr);
+            *accum_ptr = nlohmann::json {};
             for( Extent::value_type i = 0; i < *it; i++ )
             {
-                accum[i] = old; // copy boi
+                (*accum_ptr)[i] = *old_ptr; // copy boi
             }
         }
-        return accum;
+        return *accum_ptr;
     }


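The change above replaces a deep copy of the accumulator on every dimension with a pointer swap, so each level only pays for the copies into its slots. A self-contained version of the same technique (editor's sketch; the free function name ``nestedNulls`` is invented for the example):

```cpp
// Editor's sketch of the swap technique from the hunk above: build a nested,
// null-filled array for a given extent without an extra deep copy per dimension.
#include <nlohmann/json.hpp>
#include <cstddef>
#include <iostream>
#include <utility>
#include <vector>

nlohmann::json nestedNulls(std::vector<std::size_t> const & extent)
{
    nlohmann::json accum;            // starts as null, the innermost "value"
    nlohmann::json old;
    auto * accum_ptr = &accum;
    auto * old_ptr = &old;
    // Walk the extent from the innermost dimension outwards.
    for (auto it = extent.rbegin(); it != extent.rend(); ++it)
    {
        std::swap(old_ptr, accum_ptr);     // the previous level becomes the payload
        *accum_ptr = nlohmann::json{};
        for (std::size_t i = 0; i < *it; ++i)
        {
            (*accum_ptr)[i] = *old_ptr;    // one copy per slot is still needed
        }
    }
    return *accum_ptr;
}

int main()
{
    std::cout << nestedNulls({ 2, 3 }).dump() << std::endl;
    // [[null,null,null],[null,null,null]]
}
```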
@@ -994,11 +996,14 @@
             );
         for( std::string & group: groups )
         {
-            // This also enforces a JSON object
+            // Enforce a JSON object
             // the library will automatically create a list if the first
             // key added to it is parseable as an int
             jsonp = &( *jsonp )[group];
-            ( *jsonp )["object_type"] = "group";
+            if (jsonp->is_null())
+            {
+                *jsonp = nlohmann::json::object();
+            }
         }
     }

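Previously, writing ``object_type`` into each path component forced that component to become a JSON object as a side effect; the hunk above makes that intent explicit. A minimal sketch of the pattern in isolation (editor's illustration; the group names are invented):

```cpp
// Editor's sketch: explicitly turn each freshly created path component into an
// empty JSON object, mirroring the is_null() check introduced above.
#include <nlohmann/json.hpp>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    nlohmann::json file = nlohmann::json::object();
    nlohmann::json * node = &file;
    std::vector<std::string> groups = { "data", "100", "meshes" };
    for (std::string const & group : groups)
    {
        node = &(*node)[group];                 // creates a null member if missing
        if (node->is_null())
        {
            *node = nlohmann::json::object();   // make it a group explicitly
        }
    }
    std::cout << file.dump(2) << std::endl;
}
```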
@@ -1231,19 +1236,20 @@
         {
             return false;
         }
-        auto it = j.find( "object_type" );
-        return it != j.end( ) && it.value( ) == "dataset";
+        auto i = j.find( "data" );
+        return i != j.end( ) && i.value( ).is_array();
     }


-    bool JSONIOHandlerImpl::isGroup( nlohmann::json const & j )
+    bool JSONIOHandlerImpl::isGroup( nlohmann::json::const_iterator it )
     {
-        if( !j.is_object( ) )
+        auto & j = it.value();
+        if( it.key() == "attributes" || it.key() == "platform_byte_widths" || !j.is_object( ) )
         {
             return false;
         }
-        auto it = j.find( "object_type" );
-        return it != j.end( ) && it.value( ) == "group";
+        auto i = j.find( "data" );
+        return i == j.end( ) || !i.value( ).is_array();
     }


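The new classification rule can be exercised on its own: a child object holding a ``data`` array is a dataset, any other object child is a group, and the reserved keys ``attributes`` and ``platform_byte_widths`` are excluded by name. A standalone sketch (editor's addition; ``looksLikeDataset`` is an invented helper mirroring ``isDataset``):

```cpp
// Editor's sketch exercising the group/dataset heuristic introduced above.
#include <nlohmann/json.hpp>
#include <iostream>

bool looksLikeDataset(nlohmann::json const & j)
{
    if (!j.is_object())
        return false;
    auto i = j.find("data");
    return i != j.end() && i->is_array();
}

int main()
{
    auto root = nlohmann::json::parse(R"({
        "attributes": { },
        "meshes": {
            "rho": { "datatype": "DOUBLE", "data": [[0, 1], [2, 3]] }
        }
    })");
    for (auto it = root.begin(); it != root.end(); ++it)
    {
        if (it.key() == "attributes" || it.key() == "platform_byte_widths")
            continue;   // reserved keys: neither group nor dataset
        std::cout << it.key() << ": "
                  << (looksLikeDataset(it.value()) ? "dataset" : "group")
                  << std::endl;   // prints: meshes: group
    }
    std::cout << std::boolalpha
              << looksLikeDataset(root["meshes"]["rho"]) << std::endl;   // true
}
```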
@@ -1253,7 +1259,7 @@
         nlohmann::json & j
     )
     {
-        VERIFY_ALWAYS( j["object_type"] == "dataset",
+        VERIFY_ALWAYS( isDataset(j),
             "Specified dataset does not exist or is not a dataset." );

         try
