Hi,
I'm having issues with the mapping of UUID to the native HANA type.
My changeset is something like this: an id column declared with type UUID.
Database is HANA 2, on-prem.
The id column is created as VARCHAR(36).
As the ids are generated by Hibernate, inserts sometimes fail with the exception:
SAP DBTech JDBC: [274]: inserted value too large for column: Failed in "ID" column with the value ....
As a workaround, I created a custom UUIDType with a higher priority that maps UUID to VARBINARY(16) (sketched below).
Shouldn't this also be the mapping in this library, given that core and the other dialects (H2, Postgres) map it to a native UUID type?
Note: I know I could put VARBINARY(16) as the type directly in the changeSet, but we use H2 and Postgres in parallel and that would invalidate the shared changelog checksums.
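For reference, a minimal sketch of such a workaround type (the class name is illustrative; it assumes Liquibase's @DataTypeInfo priority mechanism and the extension's liquibase.ext.hana.HanaDatabase class):

```java
import liquibase.database.Database;
import liquibase.datatype.DataTypeInfo;
import liquibase.datatype.DatabaseDataType;
import liquibase.datatype.LiquibaseDataType;
import liquibase.datatype.core.UUIDType;
import liquibase.ext.hana.HanaDatabase;

// Registered with a priority above the extension's own data types, so Liquibase
// prefers it for "uuid" columns when the target database is HANA.
@DataTypeInfo(
        name = "uuid",
        aliases = { "uniqueidentifier", "java.util.UUID" },
        minParameters = 0,
        maxParameters = 0,
        priority = LiquibaseDataType.PRIORITY_DATABASE + 1)
public class HanaBinaryUuidType extends UUIDType {

    @Override
    public DatabaseDataType toDatabaseDataType(Database database) {
        if (database instanceof HanaDatabase) {
            // Store the UUID in its 16-byte binary form instead of VARCHAR(36).
            return new DatabaseDataType("VARBINARY", 16);
        }
        // Other databases (H2, Postgres, ...) keep their default mapping.
        return super.toDatabaseDataType(database);
    }
}
```

Registered as a regular Liquibase extension, the higher priority lets it win over the VARCHAR(36) mapping on HANA while leaving the H2 and Postgres mappings untouched.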
Kind regards,
Radu