Since we need to support data sync with every new release of the data sets, we have some issues tracking veteran records: there is no ID field for veterans or cemeteries, so we are using a combination of fields as an ID. Right now the veteran ID is {firstName-lastName-birthDate-deathDate-cemeteryID}, and the cemetery ID is {cemeteryName-address-zip} (see the sketch below).
This ensures we populate the database only with complete data; if we don't use the birth/death dates or the cemetery ID as keys, we get a lot of duplicate records.
On the other hand, it also means we will miss a lot of records that have only partial data.
Is there anything that can be done on the data source side (like adding record IDs)? @kbowerma
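For reference, here is roughly how the composite keys are built today. This is a minimal sketch; the column names (`firstName`, `cemeteryName`, etc.) are illustrative and may not match the actual data set headers or the loader code.

```python
# Minimal sketch of the composite-key scheme described above.
# Column names (firstName, cemeteryName, ...) are illustrative and may
# differ from the real data set headers.

def cemetery_id(row: dict) -> str:
    """Cemetery key: {cemeteryName-address-zip}."""
    return "-".join(
        str(row[f]).strip().lower()
        for f in ("cemeteryName", "address", "zip")
    )

def veteran_id(row: dict) -> str:
    """Veteran key: {firstName-lastName-birthDate-deathDate-cemeteryID}."""
    parts = [str(row[f]).strip().lower()
             for f in ("firstName", "lastName", "birthDate", "deathDate")]
    parts.append(cemetery_id(row))
    return "-".join(parts)
```

A row missing any of these fields raises a `KeyError` here (or would produce an empty key segment), which is why records with partial data get dropped instead of inserted.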
Yeah, I think we need to figure out a way to create a key from the row to ensure uniqueness as we insert. The problem with that is that if a record is edited at the source, it will produce a new key.
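Something like the following sketch, assuming we just hash the chosen columns of each row (field names are hypothetical). It also illustrates the downside: edit any hashed field at the source and the hash changes, so the next sync sees a brand-new record instead of an update.

```python
import hashlib

def row_key(row: dict, fields: list[str]) -> str:
    """Derive a key by hashing the chosen fields of a row.

    Downside noted above: if the source edits any of these fields,
    the hash changes and the edited record looks new on the next sync,
    leaving the old row orphaned.
    """
    raw = "|".join(str(row.get(f, "")).strip().lower() for f in fields)
    return hashlib.sha1(raw.encode("utf-8")).hexdigest()

# e.g. row_key(row, ["firstName", "lastName", "birthDate", "deathDate", "cemeteryID"])
```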