ontology to CSV converter
gnmerritt authored and ohshazbot committed Mar 28, 2024
1 parent 035dcf4 commit 165ba8e
Showing 3 changed files with 74 additions and 0 deletions.
1 change: 1 addition & 0 deletions .python-version
@@ -0,0 +1 @@
3.11.1/envs/dbo
12 changes: 12 additions & 0 deletions BRANCH.md
@@ -0,0 +1,12 @@
For managing our fork, there are three primary branches we will work with.

1. upstream/master - The upstream package where we eventually want all of our public improvements to land
2. main - Our internal shadow of upstream/master; it should stay 1:1 with it. Branch off of this for changes intended to go upstream
3. onboard-main - Our final copy of digital buildings, combining the merged upstream changes, pending upstream changes, and any other changes that aren't compatible with upstream/master

Workflow for making improvements:
1. Branch off of main
2. Make changes
3. Open a pull request into upstream/master

Upon acceptance and merge, we will update our internal main and then update onboard-main. If there are changes we NEED in onboard-main sooner, we can accelerate part of the cycle by opening a pull request into onboard-main at the same time as the PR into upstream/master. When the upstream PR is accepted, the subsequent merge into onboard-main should be a no-op and everything will be healthy again. Complications arise, however, if the PR changes after it has already landed in onboard-main, which is why this is not the standard workflow.
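The sync cycle described above can be sketched as a small helper. This is a hypothetical illustration, not part of the commit: the branch names come from this document, but `sync_commands` and `run_sync` are invented for the example, and the commands are returned as lists so the plan can be inspected before anything touches a repository.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of the branch-sync cycle described above."""
import subprocess


def sync_commands(upstream_branch="upstream/master"):
    """Return the git commands that refresh main and then onboard-main."""
    return [
        ["git", "fetch", "upstream"],
        # main shadows upstream/master 1:1, so a fast-forward is expected.
        ["git", "checkout", "main"],
        ["git", "merge", "--ff-only", upstream_branch],
        # onboard-main then picks up the refreshed main; an accepted
        # upstream PR should make this a no-op merge.
        ["git", "checkout", "onboard-main"],
        ["git", "merge", "main"],
    ]


def run_sync(dry_run=True):
    """Print each command; execute them only when dry_run is False."""
    for cmd in sync_commands():
        print(" ".join(cmd))
        if not dry_run:
            subprocess.run(cmd, check=True)
```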
61 changes: 61 additions & 0 deletions tools/validators/instance_validator/ontology_dump.py
@@ -0,0 +1,61 @@
#!/usr/bin/env python3
import csv
import sys

from validate import generate_universe

universe = generate_universe.BuildUniverse()
types_by_ns = universe.entity_type_universe.type_namespaces_map
rows = []

for namespace, entities in types_by_ns.items():
    print(f"\n-- working on namespace '{namespace}' --\n")

    # if namespace == 'HVAC':
    #     __import__("IPython").embed()

    for entity_name, entity in entities.valid_types_map.items():
        print(f"Entity '{entity_name}'")

        # if entity_name == 'AHU_BSPC_CO2M_DX4SC_ECON_EFSS_EFVSC_FDPM4X_HTSC_MOAFC_OAFC_SFSS_SFVSC_SSPC':
        #     __import__("IPython").embed()

        entity_row = {
            'guid': entity.guid,
            'namespace': namespace,
            'name': entity_name,
            'is_canonical': entity.is_canonical,
            'is_abstract': entity.is_abstract,
            'description': entity.description,
            'parents': "|".join([p.typename for p in entity.parent_names.values()]),
        }

        # Many canonical types don't define any new fields; they mix in existing
        # abstract types, e.g. AHU_BSPC_CO2M_DX4SC_ECON_EFSS_EFVSC_FDPM4X_HTSC_MOAFC_OAFC_SFSS_SFVSC_SSPC
        if not entity.local_field_names:
            canonical_row = {
                **entity_row,
                'dbo.point_type': None,
                # Keep the key set identical across all rows so DictWriter
                # doesn't raise ValueError on unexpected fields.
                'dbo.point_type_increment': None,
                'field_optional': None,
            }
            rows.append(canonical_row)

        for field_name, field in entity.local_field_names.items():
            increment = field.field.increment
            field_row = {
                **entity_row,
                'dbo.point_type': field.field.field,
                'dbo.point_type_increment': increment,
                'field_optional': field.optional,
            }
            rows.append(field_row)

if rows:
    rows.sort(key=lambda r: r['guid'])

    with open('flat_ontology.csv', 'w', newline='', encoding='utf-8') as f:
        writer = csv.DictWriter(f, rows[0].keys(), delimiter=',', lineterminator='\n', quoting=csv.QUOTE_MINIMAL)
        writer.writeheader()
        writer.writerows(rows)
else:
    sys.stderr.write(
        "No rows generated; ensure the validator is installed & updated "
        "(see tools/validators/ontology_validator/README.md)\n")
