
[CH] Insert wrong result for struct field #6588

Closed
exmy opened this issue Jul 25, 2024 · 9 comments · Fixed by #6691
Labels: bug (Something isn't working), triage

@exmy
Contributor

exmy commented Jul 25, 2024

Backend

CH (ClickHouse)

Bug description

create table tbl (
  a int,
  b map<string, string>,
  c struct<d:string, e:string>
) partitioned by (day string)
stored as orc;

insert overwrite tbl partition (day)
select id as a,
       str_to_map(concat('t1:','a','&t2:','b'),'&',':'),
       struct('1', null) as c,
       '2024-01-08' as day
from range(1);

select * from tbl;

expected result:

0	{"t1":"a","t2":"b"}	{"d":"1","e":null}	2024-01-08

gluten result:

0	{"t1":"a","t2":"b"}	{"d":null,"e":null}	2024-01-08

Spark version

None

Spark configurations

No response

System information

No response

Relevant logs

No response

@exmy added the bug (Something isn't working) and triage labels on Jul 25, 2024
@exmy
Contributor Author

exmy commented Jul 25, 2024

Seems unrelated to #4317. I reverted the PR code and the issue can still be reproduced.

@baibaichen
Contributor

@exmy
In Spark 3.5, this SQL seems to work; could you try it in 3.3?

insert overwrite tbl partition (day)
select id as a,
       map('t1', 'a', 't2', 'b'),
       struct('1', null) as c,
       '2024-01-08' as day
from range(10)

@exmy
Contributor Author

exmy commented Jul 25, 2024

@exmy In Spark 3.5, this SQL seems to work; could you try it in 3.3?

insert overwrite tbl partition (day)
select id as a,
       map('t1', 'a', 't2', 'b'),
       struct('1', null) as c,
       '2024-01-08' as day
from range(10)

In Spark 3.3, it can still be reproduced.

@taiyang-li
Contributor

taiyang-li commented Jul 29, 2024

It can't be reproduced in Spark 3.5 with this branch: #6601


@taiyang-li
Contributor

Let's dig into it.

@taiyang-li
Contributor

taiyang-li commented Jul 30, 2024

It can be reproduced with the Spark configs below:

SET spark.sql.catalogImplementation = 'hive';
SET spark.sql.files.maxPartitionBytes = 1g;
SET spark.serializer = 'org.apache.spark.serializer.JavaSerializer';
SET spark.sql.shuffle.partitions = 5;
SET spark.sql.adaptive.enabled = true;
SET spark.sql.files.minPartitionNum = 1;
SET spark.databricks.delta.maxSnapshotLineageLength = 20;
SET spark.databricks.delta.snapshotPartitions = 1;
SET spark.databricks.delta.properties.defaults.checkpointInterval = 5;
SET spark.databricks.delta.stalenessLimit = 3600000;
SET spark.gluten.sql.columnar.columnartorow = true;
SET spark.gluten.sql.columnar.backend.ch.worker.id = '1';
SET spark.gluten.sql.columnar.iterator = true;
SET spark.gluten.sql.columnar.hashagg.enablefinal = true;
SET spark.gluten.sql.enable.native.validation = false;
SET spark.sql.storeAssignmentPolicy = LEGACY; 
SET spark.gluten.sql.columnar.backend.ch.runtime_config.logger.level = 'debug';
0: jdbc:hive2://localhost:10000/> select * from tbl; 
+----+----------------------+-------+-------------+
| a  |          b           |   c   |     day     |
+----+----------------------+-------+-------------+
| 0  | {"t1":"a","t2":"b"}  | NULL  | 2024-01-08  |
+----+----------------------+-------+-------------+
1 row selected (0.279 seconds)
0: jdbc:hive2://localhost:10000/> 
0: jdbc:hive2://localhost:10000/> 
0: jdbc:hive2://localhost:10000/> set spark.gluten.enabled = false; 
+-----------------------+--------+
|          key          | value  |
+-----------------------+--------+
| spark.gluten.enabled  | false  |
+-----------------------+--------+
1 row selected (0.026 seconds)
0: jdbc:hive2://localhost:10000/> 
0: jdbc:hive2://localhost:10000/> select * from tbl; 
+----+----------------------+----------------------+-------------+
| a  |          b           |          c           |     day     |
+----+----------------------+----------------------+-------------+
| 0  | {"t1":"a","t2":"b"}  | {"d":null,"e":null}  | 2024-01-08  |
+----+----------------------+----------------------+-------------+
1 row selected (0.541 seconds)

file contents:

$ orc-contents ./part-00000-e89c5aef-d240-4d78-80a1-d04ebaf5868d.c000.lz4.orc          
{"a": 0, "b": [{"key": "t1", "value": "a"}, {"key": "t2", "value": "b"}], "c": {"1": "1", "2": null}}

@taiyang-li
Contributor

taiyang-li commented Jul 30, 2024

Clue 1: the schemas of the ORC files generated by native and non-native insert differ.

Native insert

$ orc-contents ./part-00000-8d8efdee-03ff-49e4-81fd-536854c0894d.c000.lz4.orc 
{"a": 0, "b": [{"key": "t1", "value": "a"}, {"key": "t2", "value": "b"}], "c": {"d": "1", "e": null}}

Non-native insert

$ orc-contents ./part-00000-e89c5aef-d240-4d78-80a1-d04ebaf5868d.c000.lz4.orc 
{"a": 0, "b": [{"key": "t1", "value": "a"}, {"key": "t2", "value": "b"}], "c": {"1": "1", "2": null}}
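The naming difference above is the crux: the reader on the select path resolves struct subfields by name against the table schema (`d`, `e`), so a file whose struct carries only positional names (`1`, `2`) yields nothing. A minimal Python sketch of the two resolution strategies (hypothetical helpers for illustration only, not Gluten or ClickHouse code):

```python
# Hypothetical illustration of struct-field resolution; not Gluten/CH code.

def resolve_by_position(file_values, target_names):
    # Pair file values with target names in order, ignoring file field names.
    # (dicts preserve insertion order in Python 3.7+.)
    return dict(zip(target_names, file_values.values()))

def resolve_by_name(file_values, target_names):
    # Look up each target name in the file's struct; missing names become null.
    return {name: file_values.get(name) for name in target_names}

# Struct field from the file above, written with positional names.
non_native_c = {"1": "1", "2": None}

print(resolve_by_position(non_native_c, ["d", "e"]))  # {'d': '1', 'e': None}
print(resolve_by_name(non_native_c, ["d", "e"]))      # {'d': None, 'e': None}
```

By-position resolution recovers the values; by-name resolution against `d`/`e` returns all nulls, matching the wrong Gluten result in the bug report.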

@taiyang-li
Contributor

taiyang-li commented Jul 30, 2024

Clue 2

Native insert with SET spark.sql.storeAssignmentPolicy = ANSI. Note that ANSI is the default value.

$ orc-contents ./part-00000-f7df5042-0bd8-4831-a40f-2b43d2309b3f.c000.lz4.orc 
{"a": 0, "b": [{"key": "t1", "value": "a"}, {"key": "t2", "value": "b"}], "c": {"d": "1", "e": null}}
0: jdbc:hive2://localhost:10000/> select * from tbl; 
+----+----------------------+---------------------+-------------+
| a  |          b           |          c          |     day     |
+----+----------------------+---------------------+-------------+
| 0  | {"t1":"a","t2":"b"}  | {"d":"1","e":null}  | 2024-01-08  |
+----+----------------------+---------------------+-------------+
1 row selected (0.284 seconds)

Native insert with SET spark.sql.storeAssignmentPolicy = LEGACY

$ orc-contents ./part-00000-0c3bb661-97e9-4839-88ac-2ecb5d19a4e5.c000.lz4.orc 
{"a": 0, "b": [{"key": "t1", "value": "a"}, {"key": "t2", "value": "b"}], "c": {"1": "1", "2": null}}
0: jdbc:hive2://localhost:10000/> select * from tbl; 
+----+----------------------+-------+-------------+
| a  |          b           |   c   |     day     |
+----+----------------------+-------+-------------+
| 0  | {"t1":"a","t2":"b"}  | NULL  | 2024-01-08  |
+----+----------------------+-------+-------------+
1 row selected (0.899 seconds)

@taiyang-li
Contributor

taiyang-li commented Aug 2, 2024

Root cause:

  1. Insert stage
    When SET spark.sql.storeAssignmentPolicy = LEGACY, the insert operator (ORCBlockOutputFormat for the ORC format) uses the CH Block output by FakeRow as its header. As a result, the types in the ORC file generated by ORCBlockOutputFormat differ from those declared in the CREATE TABLE SQL.

old_header corresponds to the schema of the CH Block output by FakeRow, where field c has type Tuple(String, Nullable(String)).
new_header corresponds to the schema in the table metadata, where field c has type Tuple(d Nullable(String), e Nullable(String)).
old_header:a#27 Int32 Int32(size = 0), b#28 Map(String, String) Map(size = 0, Array(size = 0, UInt64(size = 0), Tuple(size = 0, String(size = 0), String(size = 0)))), c#29 Tuple(String, Nullable(String)) Tuple(size = 0, String(size = 0), Nullable(size = 0, String(size = 0), UInt8(size = 0)))
new_header:a Int32 Int32(size = 0), b Map(String, Nullable(String)) Map(size = 0, Array(size = 0, UInt64(size = 0), Tuple(size = 0, String(size = 0), Nullable(size = 0, String(size = 0), UInt8(size = 0))))), c Tuple(d Nullable(String), e Nullable(String)) Tuple(size = 0, Nullable(size = 0, String(size = 0), UInt8(size = 0)), Nullable(size = 0, String(size = 0), UInt8(size = 0)))
  2. Select stage
    On top of 1, when reading the ORC file, the field whose type in the file is Tuple(String, Nullable(String)) is read with Tuple(d Nullable(String), e Nullable(String)) as the target type, so the correct data cannot be recovered. Overall, this shows up as inconsistent data between insert and select.
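Putting the two stages together, the failure mode can be sketched in plain Python (the `write_struct`/`read_struct` helpers below are hypothetical stand-ins for ORCBlockOutputFormat and the native reader, not actual Gluten/CH code):

```python
# Hypothetical two-stage sketch of the bug; not actual Gluten/CH code.

# Stage 1 (insert): under LEGACY, the file is written with the FakeRow
# block header, whose tuple type carries no field names, so the writer
# falls back to positional names "1", "2".
def write_struct(values, field_names=None):
    names = field_names or [str(i + 1) for i in range(len(values))]
    return dict(zip(names, values))

# Stage 2 (select): the reader targets the table-metadata type
# Tuple(d Nullable(String), e Nullable(String)) and resolves by name;
# names absent from the file come back as null.
def read_struct(file_struct, target_names):
    return {name: file_struct.get(name) for name in target_names}

file_struct = write_struct(["1", None])          # {'1': '1', '2': None}
print(read_struct(file_struct, ["d", "e"]))      # {'d': None, 'e': None}

# Per clue 2, under ANSI (the default) the file keeps the declared names,
# so by-name reads succeed:
file_struct_ansi = write_struct(["1", None], ["d", "e"])
print(read_struct(file_struct_ansi, ["d", "e"]))  # {'d': '1', 'e': None}
```

This reproduces the observed behavior: LEGACY inserts produce all-null struct reads, while ANSI inserts round-trip correctly.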
