PNNX is an open standard for PyTorch model interoperability #3262

Merged
merged 173 commits into Tencent:master on Nov 22, 2021

Conversation

nihui
Member

@nihui nihui commented Oct 1, 2021

Feel free to leave your ideas and comments here :)

@iyangzy

iyangzy commented Nov 22, 2021

I was busy with autumn campus recruiting for a while; I tried building it today and it compiles successfully, thanks a lot!
The only awkward part is the install location:
-- Installing: /usr/local/bin/pnnx
and I am a bit particular about this...
Hope this can be sorted out before the final merge into master.

@nihui
Member Author

nihui commented Nov 22, 2021

I was busy with autumn campus recruiting for a while; I tried building it today and it compiles successfully, thanks a lot! The only awkward part is the install location: -- Installing: /usr/local/bin/pnnx and I am a bit particular about this... Hope this can be sorted out before the final merge into master.

cmake -DCMAKE_INSTALL_PREFIX=<your target dir> ..
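For example, to install the pnnx binary under a user-local prefix instead of /usr/local (the tools/pnnx path below assumes the layout of this PR's source tree, and $HOME/.local is just an illustrative prefix):

# build pnnx and install it under $HOME/.local instead of /usr/local
cd ncnn/tools/pnnx
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=$HOME/.local ..
make -j$(nproc)
make install
# the binary then ends up in $HOME/.local/bin/pnnx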

@iyangzy

iyangzy commented Nov 22, 2021

Got it.

@nihui nihui merged commit e4c821a into Tencent:master Nov 22, 2021
@zhu-zhaofei

zhu-zhaofei commented Dec 19, 2021

Hi, I am trying to convert a GAN model. The conversion produces pnnx.bin, pnnx.param, pnnx.py, debug.bin, debug.param, debug2.param and debug2.bin, but it does not produce ncnn.param and ncnn.bin. After some debugging I found that pnnx::pass_ncnn(pnnx_graph) in main() fails: inside pass_ncnn.cpp the ncnn::expand_expression(g) pass does not get through. Stepping into ncnn::expand_expression(g), at
Operand* old_output_operand = op->outputs[0];
Operand* new_output_operand = graph.get_operand(op->name + "" + outname);
the pointer new_output_operand comes back as nullptr, which then causes a crash. Could you please take a look? Many thanks.
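(Side note, purely illustrative: while tracking this down, one could add a local guard around the quoted lines so the pass reports the missing operand instead of dereferencing a null pointer; the variable outname and the enclosing loop are assumptions taken from the snippet above, and this is not the actual upstream fix.)

// Illustrative local guard only; outname and the surrounding loop are assumed from the quoted snippet.
Operand* old_output_operand = op->outputs[0];
Operand* new_output_operand = graph.get_operand(op->name + "" + outname);
if (!new_output_operand)
{
    // report which operand lookup failed instead of crashing on a nullptr dereference
    fprintf(stderr, "expand_expression: operand %s%s not found for op %s\n", op->name.c_str(), outname.c_str(), op->name.c_str());
    continue;
}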
The converted pnnx.param file is:
7767517
676 682
pnnx.Input pnnx_input_0 0 1 0 #0=(1,3,512,512)f32
pnnx.Expression pnnx_expr_2438 0 1 1 expr=None
pnnx.Expression pnnx_expr_2436 0 1 2 expr=2.000000e-01
nn.Conv2d conv_body_first 1 1 0 3 bias=True dilation=(1,1) groups=1 in_channels=3 kernel_size=(1,1) out_channels=32 padding=(0,0) padding_mode=zeros stride=(1,1) @bias=(32)f32 @weight=(32,3,1,1)f32 #0=(1,3,512,512)f32 #3=(1,32,512,512)f32
aten::leaky_relu pnnx_6 2 1 3 2 4 #3=(1,32,512,512)f32 #4=(1,32,512,512)f32
pnnx.Expression pnnx_expr_2431 0 1 5 expr=2.000000e-01
nn.Conv2d conv_body_down.0.conv1 1 1 4 6 bias=True dilation=(1,1) groups=1 in_channels=32 kernel_size=(3,3) out_channels=32 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(32)f32 @weight=(32,32,3,3)f32 #4=(1,32,512,512)f32 #6=(1,32,512,512)f32
aten::leaky_relu_ pnnx_12 2 1 6 5 7 #6=(1,32,512,512)f32 #7=(1,32,512,512)f32
F.upsample F.upsample_38 1 1 7 8 align_corners=False mode=bilinear scale_factor=(5.000000e-01,5.000000e-01) $input=7 #7=(1,32,512,512)f32 #8=(1,32,256,256)f32
nn.Conv2d conv_body_down.0.conv2 1 1 8 9 bias=True dilation=(1,1) groups=1 in_channels=32 kernel_size=(3,3) out_channels=64 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(64)f32 @weight=(64,32,3,3)f32 #8=(1,32,256,256)f32 #9=(1,64,256,256)f32
pnnx.Expression pnnx_expr_2427 0 1 10 expr=2.000000e-01
aten::leaky_relu_ pnnx_17 2 1 9 10 11 #9=(1,64,256,256)f32 #11=(1,64,256,256)f32
F.upsample F.upsample_39 1 1 4 12 align_corners=False mode=bilinear scale_factor=(5.000000e-01,5.000000e-01) $input=4 #4=(1,32,512,512)f32 #12=(1,32,256,256)f32
nn.Conv2d conv_body_down.0.skip 1 1 12 13 bias=False dilation=(1,1) groups=1 in_channels=32 kernel_size=(1,1) out_channels=64 padding=(0,0) padding_mode=zeros stride=(1,1) @weight=(64,32,1,1)f32 #12=(1,32,256,256)f32 #13=(1,64,256,256)f32
pnnx.Expression pnnx_expr_2423 2 1 11 13 14 expr=add(@0,@1) #11=(1,64,256,256)f32 #13=(1,64,256,256)f32 #14=(1,64,256,256)f32
pnnx.Expression pnnx_expr_2421 0 1 15 expr=2.000000e-01
nn.Conv2d conv_body_down.1.conv1 1 1 14 16 bias=True dilation=(1,1) groups=1 in_channels=64 kernel_size=(3,3) out_channels=64 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(64)f32 @weight=(64,64,3,3)f32 #14=(1,64,256,256)f32 #16=(1,64,256,256)f32
aten::leaky_relu_ pnnx_30 2 1 16 15 17 #16=(1,64,256,256)f32 #17=(1,64,256,256)f32
F.upsample F.upsample_40 1 1 17 18 align_corners=False mode=bilinear scale_factor=(5.000000e-01,5.000000e-01) $input=17 #17=(1,64,256,256)f32 #18=(1,64,128,128)f32
nn.Conv2d conv_body_down.1.conv2 1 1 18 19 bias=True dilation=(1,1) groups=1 in_channels=64 kernel_size=(3,3) out_channels=128 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(128)f32 @weight=(128,64,3,3)f32 #18=(1,64,128,128)f32 #19=(1,128,128,128)f32
pnnx.Expression pnnx_expr_2417 0 1 20 expr=2.000000e-01
aten::leaky_relu_ pnnx_35 2 1 19 20 21 #19=(1,128,128,128)f32 #21=(1,128,128,128)f32
F.upsample F.upsample_41 1 1 14 22 align_corners=False mode=bilinear scale_factor=(5.000000e-01,5.000000e-01) $input=14 #14=(1,64,256,256)f32 #22=(1,64,128,128)f32
nn.Conv2d conv_body_down.1.skip 1 1 22 23 bias=False dilation=(1,1) groups=1 in_channels=64 kernel_size=(1,1) out_channels=128 padding=(0,0) padding_mode=zeros stride=(1,1) @weight=(128,64,1,1)f32 #22=(1,64,128,128)f32 #23=(1,128,128,128)f32
pnnx.Expression pnnx_expr_2413 2 1 21 23 24 expr=add(@0,@1) #21=(1,128,128,128)f32 #23=(1,128,128,128)f32 #24=(1,128,128,128)f32
pnnx.Expression pnnx_expr_2411 0 1 25 expr=2.000000e-01
nn.Conv2d conv_body_down.2.conv1 1 1 24 26 bias=True dilation=(1,1) groups=1 in_channels=128 kernel_size=(3,3) out_channels=128 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(128)f32 @weight=(128,128,3,3)f32 #24=(1,128,128,128)f32 #26=(1,128,128,128)f32
aten::leaky_relu_ pnnx_48 2 1 26 25 27 #26=(1,128,128,128)f32 #27=(1,128,128,128)f32
F.upsample F.upsample_42 1 1 27 28 align_corners=False mode=bilinear scale_factor=(5.000000e-01,5.000000e-01) $input=27 #27=(1,128,128,128)f32 #28=(1,128,64,64)f32
nn.Conv2d conv_body_down.2.conv2 1 1 28 29 bias=True dilation=(1,1) groups=1 in_channels=128 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,128,3,3)f32 #28=(1,128,64,64)f32 #29=(1,256,64,64)f32
pnnx.Expression pnnx_expr_2407 0 1 30 expr=2.000000e-01
aten::leaky_relu_ pnnx_53 2 1 29 30 31 #29=(1,256,64,64)f32 #31=(1,256,64,64)f32
F.upsample F.upsample_43 1 1 24 32 align_corners=False mode=bilinear scale_factor=(5.000000e-01,5.000000e-01) $input=24 #24=(1,128,128,128)f32 #32=(1,128,64,64)f32
nn.Conv2d conv_body_down.2.skip 1 1 32 33 bias=False dilation=(1,1) groups=1 in_channels=128 kernel_size=(1,1) out_channels=256 padding=(0,0) padding_mode=zeros stride=(1,1) @weight=(256,128,1,1)f32 #32=(1,128,64,64)f32 #33=(1,256,64,64)f32
pnnx.Expression pnnx_expr_2403 2 1 31 33 34 expr=add(@0,@1) #31=(1,256,64,64)f32 #33=(1,256,64,64)f32 #34=(1,256,64,64)f32
pnnx.Expression pnnx_expr_2401 0 1 35 expr=2.000000e-01
nn.Conv2d conv_body_down.3.conv1 1 1 34 36 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #34=(1,256,64,64)f32 #36=(1,256,64,64)f32
aten::leaky_relu_ pnnx_66 2 1 36 35 37 #36=(1,256,64,64)f32 #37=(1,256,64,64)f32
F.upsample F.upsample_44 1 1 37 38 align_corners=False mode=bilinear scale_factor=(5.000000e-01,5.000000e-01) $input=37 #37=(1,256,64,64)f32 #38=(1,256,32,32)f32
nn.Conv2d conv_body_down.3.conv2 1 1 38 39 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #38=(1,256,32,32)f32 #39=(1,256,32,32)f32
pnnx.Expression pnnx_expr_2397 0 1 40 expr=2.000000e-01
aten::leaky_relu_ pnnx_71 2 1 39 40 41 #39=(1,256,32,32)f32 #41=(1,256,32,32)f32
F.upsample F.upsample_45 1 1 34 42 align_corners=False mode=bilinear scale_factor=(5.000000e-01,5.000000e-01) $input=34 #34=(1,256,64,64)f32 #42=(1,256,32,32)f32
nn.Conv2d conv_body_down.3.skip 1 1 42 43 bias=False dilation=(1,1) groups=1 in_channels=256 kernel_size=(1,1) out_channels=256 padding=(0,0) padding_mode=zeros stride=(1,1) @weight=(256,256,1,1)f32 #42=(1,256,32,32)f32 #43=(1,256,32,32)f32
pnnx.Expression pnnx_expr_2393 2 1 41 43 44 expr=add(@0,@1) #41=(1,256,32,32)f32 #43=(1,256,32,32)f32 #44=(1,256,32,32)f32
pnnx.Expression pnnx_expr_2391 0 1 45 expr=2.000000e-01
nn.Conv2d conv_body_down.4.conv1 1 1 44 46 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #44=(1,256,32,32)f32 #46=(1,256,32,32)f32
aten::leaky_relu_ pnnx_84 2 1 46 45 47 #46=(1,256,32,32)f32 #47=(1,256,32,32)f32
F.upsample F.upsample_46 1 1 47 48 align_corners=False mode=bilinear scale_factor=(5.000000e-01,5.000000e-01) $input=47 #47=(1,256,32,32)f32 #48=(1,256,16,16)f32
nn.Conv2d conv_body_down.4.conv2 1 1 48 49 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #48=(1,256,16,16)f32 #49=(1,256,16,16)f32
pnnx.Expression pnnx_expr_2387 0 1 50 expr=2.000000e-01
aten::leaky_relu_ pnnx_89 2 1 49 50 51 #49=(1,256,16,16)f32 #51=(1,256,16,16)f32
F.upsample F.upsample_47 1 1 44 52 align_corners=False mode=bilinear scale_factor=(5.000000e-01,5.000000e-01) $input=44 #44=(1,256,32,32)f32 #52=(1,256,16,16)f32
nn.Conv2d conv_body_down.4.skip 1 1 52 53 bias=False dilation=(1,1) groups=1 in_channels=256 kernel_size=(1,1) out_channels=256 padding=(0,0) padding_mode=zeros stride=(1,1) @weight=(256,256,1,1)f32 #52=(1,256,16,16)f32 #53=(1,256,16,16)f32
pnnx.Expression pnnx_expr_2383 2 1 51 53 54 expr=add(@0,@1) #51=(1,256,16,16)f32 #53=(1,256,16,16)f32 #54=(1,256,16,16)f32
pnnx.Expression pnnx_expr_2381 0 1 55 expr=2.000000e-01
nn.Conv2d conv_body_down.5.conv1 1 1 54 56 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #54=(1,256,16,16)f32 #56=(1,256,16,16)f32
aten::leaky_relu_ pnnx_102 2 1 56 55 57 #56=(1,256,16,16)f32 #57=(1,256,16,16)f32
F.upsample F.upsample_48 1 1 57 58 align_corners=False mode=bilinear scale_factor=(5.000000e-01,5.000000e-01) $input=57 #57=(1,256,16,16)f32 #58=(1,256,8,8)f32
nn.Conv2d conv_body_down.5.conv2 1 1 58 59 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #58=(1,256,8,8)f32 #59=(1,256,8,8)f32
pnnx.Expression pnnx_expr_2377 0 1 60 expr=2.000000e-01
aten::leaky_relu_ pnnx_107 2 1 59 60 61 #59=(1,256,8,8)f32 #61=(1,256,8,8)f32
F.upsample F.upsample_49 1 1 54 62 align_corners=False mode=bilinear scale_factor=(5.000000e-01,5.000000e-01) $input=54 #54=(1,256,16,16)f32 #62=(1,256,8,8)f32
nn.Conv2d conv_body_down.5.skip 1 1 62 63 bias=False dilation=(1,1) groups=1 in_channels=256 kernel_size=(1,1) out_channels=256 padding=(0,0) padding_mode=zeros stride=(1,1) @weight=(256,256,1,1)f32 #62=(1,256,8,8)f32 #63=(1,256,8,8)f32
pnnx.Expression pnnx_expr_2373 2 1 61 63 64 expr=add(@0,@1) #61=(1,256,8,8)f32 #63=(1,256,8,8)f32 #64=(1,256,8,8)f32
pnnx.Expression pnnx_expr_2371 0 1 65 expr=2.000000e-01
nn.Conv2d conv_body_down.6.conv1 1 1 64 66 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #64=(1,256,8,8)f32 #66=(1,256,8,8)f32
aten::leaky_relu_ pnnx_120 2 1 66 65 67 #66=(1,256,8,8)f32 #67=(1,256,8,8)f32
F.upsample F.upsample_50 1 1 67 68 align_corners=False mode=bilinear scale_factor=(5.000000e-01,5.000000e-01) $input=67 #67=(1,256,8,8)f32 #68=(1,256,4,4)f32
nn.Conv2d conv_body_down.6.conv2 1 1 68 69 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #68=(1,256,4,4)f32 #69=(1,256,4,4)f32
pnnx.Expression pnnx_expr_2367 0 1 70 expr=2.000000e-01
aten::leaky_relu_ pnnx_125 2 1 69 70 71 #69=(1,256,4,4)f32 #71=(1,256,4,4)f32
F.upsample F.upsample_51 1 1 64 72 align_corners=False mode=bilinear scale_factor=(5.000000e-01,5.000000e-01) $input=64 #64=(1,256,8,8)f32 #72=(1,256,4,4)f32
nn.Conv2d conv_body_down.6.skip 1 1 72 73 bias=False dilation=(1,1) groups=1 in_channels=256 kernel_size=(1,1) out_channels=256 padding=(0,0) padding_mode=zeros stride=(1,1) @weight=(256,256,1,1)f32 #72=(1,256,4,4)f32 #73=(1,256,4,4)f32
pnnx.Expression pnnx_expr_2363 2 1 71 73 74 expr=add(@0,@1) #71=(1,256,4,4)f32 #73=(1,256,4,4)f32 #74=(1,256,4,4)f32
nn.Conv2d final_conv 1 1 74 75 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #74=(1,256,4,4)f32 #75=(1,256,4,4)f32
pnnx.Expression pnnx_expr_2362 0 1 76 expr=2.000000e-01
aten::leaky_relu_ pnnx_134 2 1 75 76 77 #75=(1,256,4,4)f32 #77=(1,256,4,4)f32
Tensor.view Tensor.view_156 1 1 77 78 shape=(1,-1) $input=77 #77=(1,256,4,4)f32 #78=(1,4096)f32
nn.Linear final_linear 1 1 78 79 bias=True in_features=4096 out_features=8192 @bias=(8192)f32 @weight=(8192,4096)f32 #78=(1,4096)f32 #79=(1,8192)f32
pnnx.Expression pnnx_expr_2351 2 1 77 74 80 expr=add(@0,@1) #77=(1,256,4,4)f32 #74=(1,256,4,4)f32 #80=(1,256,4,4)f32
pnnx.Expression pnnx_expr_2349 0 1 81 expr=2.000000e-01
nn.Conv2d conv_body_up.0.conv1 1 1 80 82 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #80=(1,256,4,4)f32 #82=(1,256,4,4)f32
aten::leaky_relu_ pnnx_153 2 1 82 81 83 #82=(1,256,4,4)f32 #83=(1,256,4,4)f32
F.upsample F.upsample_52 1 1 83 84 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=83 #83=(1,256,4,4)f32 #84=(1,256,8,8)f32
nn.Conv2d conv_body_up.0.conv2 1 1 84 85 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #84=(1,256,8,8)f32 #85=(1,256,8,8)f32
pnnx.Expression pnnx_expr_2345 0 1 86 expr=2.000000e-01
aten::leaky_relu_ pnnx_158 2 1 85 86 87 #85=(1,256,8,8)f32 #87=(1,256,8,8)f32
F.upsample F.upsample_53 1 1 80 88 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=80 #80=(1,256,4,4)f32 #88=(1,256,8,8)f32
nn.Conv2d conv_body_up.0.skip 1 1 88 89 bias=False dilation=(1,1) groups=1 in_channels=256 kernel_size=(1,1) out_channels=256 padding=(0,0) padding_mode=zeros stride=(1,1) @weight=(256,256,1,1)f32 #88=(1,256,8,8)f32 #89=(1,256,8,8)f32
pnnx.Expression pnnx_expr_2341 2 1 87 89 90 expr=add(@0,@1) #87=(1,256,8,8)f32 #89=(1,256,8,8)f32 #90=(1,256,8,8)f32
nn.Conv2d condition_scale.0.0 1 1 90 91 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #90=(1,256,8,8)f32 #91=(1,256,8,8)f32
nn.LeakyReLU condition_scale.0.1 1 1 91 92 negative_slope=2.000000e-01 #91=(1,256,8,8)f32 #92=(1,256,8,8)f32
nn.Conv2d condition_scale.0.2 1 1 92 93 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #92=(1,256,8,8)f32 #93=(1,256,8,8)f32
aten::clone pnnx_166 2 1 93 1 94 #93=(1,256,8,8)f32 #94=(1,256,8,8)f32
nn.Conv2d condition_shift.0.0 1 1 90 95 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #90=(1,256,8,8)f32 #95=(1,256,8,8)f32
nn.LeakyReLU condition_shift.0.1 1 1 95 96 negative_slope=2.000000e-01 #95=(1,256,8,8)f32 #96=(1,256,8,8)f32
nn.Conv2d condition_shift.0.2 1 1 96 97 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #96=(1,256,8,8)f32 #97=(1,256,8,8)f32
pnnx.Expression pnnx_expr_2340 0 1 98 expr=None
aten::clone pnnx_168 2 1 97 98 99 #97=(1,256,8,8)f32 #99=(1,256,8,8)f32
nn.Conv2d toRGB.0 1 1 90 100 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(1,1) out_channels=3 padding=(0,0) padding_mode=zeros stride=(1,1) @bias=(3)f32 @weight=(3,256,1,1)f32 #90=(1,256,8,8)f32 #100=(1,3,8,8)f32
pnnx.Expression pnnx_expr_2338 2 1 90 64 101 expr=add(@0,@1) #90=(1,256,8,8)f32 #64=(1,256,8,8)f32 #101=(1,256,8,8)f32
pnnx.Expression pnnx_expr_2336 0 1 102 expr=2.000000e-01
nn.Conv2d conv_body_up.1.conv1 1 1 101 103 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #101=(1,256,8,8)f32 #103=(1,256,8,8)f32
aten::leaky_relu_ pnnx_176 2 1 103 102 104 #103=(1,256,8,8)f32 #104=(1,256,8,8)f32
F.upsample F.upsample_54 1 1 104 105 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=104 #104=(1,256,8,8)f32 #105=(1,256,16,16)f32
nn.Conv2d conv_body_up.1.conv2 1 1 105 106 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #105=(1,256,16,16)f32 #106=(1,256,16,16)f32
pnnx.Expression pnnx_expr_2332 0 1 107 expr=2.000000e-01
aten::leaky_relu_ pnnx_181 2 1 106 107 108 #106=(1,256,16,16)f32 #108=(1,256,16,16)f32
F.upsample F.upsample_55 1 1 101 109 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=101 #101=(1,256,8,8)f32 #109=(1,256,16,16)f32
nn.Conv2d conv_body_up.1.skip 1 1 109 110 bias=False dilation=(1,1) groups=1 in_channels=256 kernel_size=(1,1) out_channels=256 padding=(0,0) padding_mode=zeros stride=(1,1) @weight=(256,256,1,1)f32 #109=(1,256,16,16)f32 #110=(1,256,16,16)f32
pnnx.Expression pnnx_expr_2328 2 1 108 110 111 expr=add(@0,@1) #108=(1,256,16,16)f32 #110=(1,256,16,16)f32 #111=(1,256,16,16)f32
nn.Conv2d condition_scale.1.0 1 1 111 112 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #111=(1,256,16,16)f32 #112=(1,256,16,16)f32
nn.LeakyReLU condition_scale.1.1 1 1 112 113 negative_slope=2.000000e-01 #112=(1,256,16,16)f32 #113=(1,256,16,16)f32
nn.Conv2d condition_scale.1.2 1 1 113 114 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #113=(1,256,16,16)f32 #114=(1,256,16,16)f32
pnnx.Expression pnnx_expr_2327 0 1 115 expr=None
aten::clone pnnx_190 2 1 114 115 116 #114=(1,256,16,16)f32 #116=(1,256,16,16)f32
nn.Conv2d condition_shift.1.0 1 1 111 117 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #111=(1,256,16,16)f32 #117=(1,256,16,16)f32
nn.LeakyReLU condition_shift.1.1 1 1 117 118 negative_slope=2.000000e-01 #117=(1,256,16,16)f32 #118=(1,256,16,16)f32
nn.Conv2d condition_shift.1.2 1 1 118 119 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #118=(1,256,16,16)f32 #119=(1,256,16,16)f32
pnnx.Expression pnnx_expr_2326 0 1 120 expr=None
aten::clone pnnx_192 2 1 119 120 121 #119=(1,256,16,16)f32 #121=(1,256,16,16)f32
nn.Conv2d toRGB.1 1 1 111 122 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(1,1) out_channels=3 padding=(0,0) padding_mode=zeros stride=(1,1) @bias=(3)f32 @weight=(3,256,1,1)f32 #111=(1,256,16,16)f32 #122=(1,3,16,16)f32
pnnx.Expression pnnx_expr_2324 2 1 111 54 123 expr=add(@0,@1) #111=(1,256,16,16)f32 #54=(1,256,16,16)f32 #123=(1,256,16,16)f32
pnnx.Expression pnnx_expr_2322 0 1 124 expr=2.000000e-01
nn.Conv2d conv_body_up.2.conv1 1 1 123 125 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #123=(1,256,16,16)f32 #125=(1,256,16,16)f32
aten::leaky_relu_ pnnx_200 2 1 125 124 126 #125=(1,256,16,16)f32 #126=(1,256,16,16)f32
F.upsample F.upsample_56 1 1 126 127 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=126 #126=(1,256,16,16)f32 #127=(1,256,32,32)f32
nn.Conv2d conv_body_up.2.conv2 1 1 127 128 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #127=(1,256,32,32)f32 #128=(1,256,32,32)f32
pnnx.Expression pnnx_expr_2318 0 1 129 expr=2.000000e-01
aten::leaky_relu_ pnnx_205 2 1 128 129 130 #128=(1,256,32,32)f32 #130=(1,256,32,32)f32
F.upsample F.upsample_57 1 1 123 131 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=123 #123=(1,256,16,16)f32 #131=(1,256,32,32)f32
nn.Conv2d conv_body_up.2.skip 1 1 131 132 bias=False dilation=(1,1) groups=1 in_channels=256 kernel_size=(1,1) out_channels=256 padding=(0,0) padding_mode=zeros stride=(1,1) @weight=(256,256,1,1)f32 #131=(1,256,32,32)f32 #132=(1,256,32,32)f32
pnnx.Expression pnnx_expr_2314 2 1 130 132 133 expr=add(@0,@1) #130=(1,256,32,32)f32 #132=(1,256,32,32)f32 #133=(1,256,32,32)f32
nn.Conv2d condition_scale.2.0 1 1 133 134 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #133=(1,256,32,32)f32 #134=(1,256,32,32)f32
nn.LeakyReLU condition_scale.2.1 1 1 134 135 negative_slope=2.000000e-01 #134=(1,256,32,32)f32 #135=(1,256,32,32)f32
nn.Conv2d condition_scale.2.2 1 1 135 136 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #135=(1,256,32,32)f32 #136=(1,256,32,32)f32
pnnx.Expression pnnx_expr_2313 0 1 137 expr=None
aten::clone pnnx_214 2 1 136 137 138 #136=(1,256,32,32)f32 #138=(1,256,32,32)f32
nn.Conv2d condition_shift.2.0 1 1 133 139 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #133=(1,256,32,32)f32 #139=(1,256,32,32)f32
nn.LeakyReLU condition_shift.2.1 1 1 139 140 negative_slope=2.000000e-01 #139=(1,256,32,32)f32 #140=(1,256,32,32)f32
nn.Conv2d condition_shift.2.2 1 1 140 141 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #140=(1,256,32,32)f32 #141=(1,256,32,32)f32
pnnx.Expression pnnx_expr_2312 0 1 142 expr=None
aten::clone pnnx_216 2 1 141 142 143 #141=(1,256,32,32)f32 #143=(1,256,32,32)f32
nn.Conv2d toRGB.2 1 1 133 144 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(1,1) out_channels=3 padding=(0,0) padding_mode=zeros stride=(1,1) @bias=(3)f32 @weight=(3,256,1,1)f32 #133=(1,256,32,32)f32 #144=(1,3,32,32)f32
pnnx.Expression pnnx_expr_2310 2 1 133 44 145 expr=add(@0,@1) #133=(1,256,32,32)f32 #44=(1,256,32,32)f32 #145=(1,256,32,32)f32
pnnx.Expression pnnx_expr_2308 0 1 146 expr=2.000000e-01
nn.Conv2d conv_body_up.3.conv1 1 1 145 147 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #145=(1,256,32,32)f32 #147=(1,256,32,32)f32
aten::leaky_relu_ pnnx_224 2 1 147 146 148 #147=(1,256,32,32)f32 #148=(1,256,32,32)f32
F.upsample F.upsample_58 1 1 148 149 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=148 #148=(1,256,32,32)f32 #149=(1,256,64,64)f32
nn.Conv2d conv_body_up.3.conv2 1 1 149 150 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #149=(1,256,64,64)f32 #150=(1,256,64,64)f32
pnnx.Expression pnnx_expr_2304 0 1 151 expr=2.000000e-01
aten::leaky_relu_ pnnx_229 2 1 150 151 152 #150=(1,256,64,64)f32 #152=(1,256,64,64)f32
F.upsample F.upsample_59 1 1 145 153 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=145 #145=(1,256,32,32)f32 #153=(1,256,64,64)f32
nn.Conv2d conv_body_up.3.skip 1 1 153 154 bias=False dilation=(1,1) groups=1 in_channels=256 kernel_size=(1,1) out_channels=256 padding=(0,0) padding_mode=zeros stride=(1,1) @weight=(256,256,1,1)f32 #153=(1,256,64,64)f32 #154=(1,256,64,64)f32
pnnx.Expression pnnx_expr_2300 2 1 152 154 155 expr=add(@0,@1) #152=(1,256,64,64)f32 #154=(1,256,64,64)f32 #155=(1,256,64,64)f32
nn.Conv2d condition_scale.3.0 1 1 155 156 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #155=(1,256,64,64)f32 #156=(1,256,64,64)f32
nn.LeakyReLU condition_scale.3.1 1 1 156 157 negative_slope=2.000000e-01 #156=(1,256,64,64)f32 #157=(1,256,64,64)f32
nn.Conv2d condition_scale.3.2 1 1 157 158 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #157=(1,256,64,64)f32 #158=(1,256,64,64)f32
pnnx.Expression pnnx_expr_2299 0 1 159 expr=None
aten::clone pnnx_238 2 1 158 159 160 #158=(1,256,64,64)f32 #160=(1,256,64,64)f32
nn.Conv2d condition_shift.3.0 1 1 155 161 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #155=(1,256,64,64)f32 #161=(1,256,64,64)f32
nn.LeakyReLU condition_shift.3.1 1 1 161 162 negative_slope=2.000000e-01 #161=(1,256,64,64)f32 #162=(1,256,64,64)f32
nn.Conv2d condition_shift.3.2 1 1 162 163 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #162=(1,256,64,64)f32 #163=(1,256,64,64)f32
pnnx.Expression pnnx_expr_2298 0 1 164 expr=None
aten::clone pnnx_240 2 1 163 164 165 #163=(1,256,64,64)f32 #165=(1,256,64,64)f32
nn.Conv2d toRGB.3 1 1 155 166 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(1,1) out_channels=3 padding=(0,0) padding_mode=zeros stride=(1,1) @bias=(3)f32 @weight=(3,256,1,1)f32 #155=(1,256,64,64)f32 #166=(1,3,64,64)f32
pnnx.Expression pnnx_expr_2296 2 1 155 34 167 expr=add(@0,@1) #155=(1,256,64,64)f32 #34=(1,256,64,64)f32 #167=(1,256,64,64)f32
pnnx.Expression pnnx_expr_2294 0 1 168 expr=2.000000e-01
nn.Conv2d conv_body_up.4.conv1 1 1 167 169 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=256 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(256)f32 @weight=(256,256,3,3)f32 #167=(1,256,64,64)f32 #169=(1,256,64,64)f32
aten::leaky_relu_ pnnx_248 2 1 169 168 170 #169=(1,256,64,64)f32 #170=(1,256,64,64)f32
F.upsample F.upsample_60 1 1 170 171 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=170 #170=(1,256,64,64)f32 #171=(1,256,128,128)f32
nn.Conv2d conv_body_up.4.conv2 1 1 171 172 bias=True dilation=(1,1) groups=1 in_channels=256 kernel_size=(3,3) out_channels=128 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(128)f32 @weight=(128,256,3,3)f32 #171=(1,256,128,128)f32 #172=(1,128,128,128)f32
pnnx.Expression pnnx_expr_2290 0 1 173 expr=2.000000e-01
aten::leaky_relu_ pnnx_253 2 1 172 173 174 #172=(1,128,128,128)f32 #174=(1,128,128,128)f32
F.upsample F.upsample_61 1 1 167 175 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=167 #167=(1,256,64,64)f32 #175=(1,256,128,128)f32
nn.Conv2d conv_body_up.4.skip 1 1 175 176 bias=False dilation=(1,1) groups=1 in_channels=256 kernel_size=(1,1) out_channels=128 padding=(0,0) padding_mode=zeros stride=(1,1) @weight=(128,256,1,1)f32 #175=(1,256,128,128)f32 #176=(1,128,128,128)f32
pnnx.Expression pnnx_expr_2286 2 1 174 176 177 expr=add(@0,@1) #174=(1,128,128,128)f32 #176=(1,128,128,128)f32 #177=(1,128,128,128)f32
nn.Conv2d condition_scale.4.0 1 1 177 178 bias=True dilation=(1,1) groups=1 in_channels=128 kernel_size=(3,3) out_channels=128 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(128)f32 @weight=(128,128,3,3)f32 #177=(1,128,128,128)f32 #178=(1,128,128,128)f32
nn.LeakyReLU condition_scale.4.1 1 1 178 179 negative_slope=2.000000e-01 #178=(1,128,128,128)f32 #179=(1,128,128,128)f32
nn.Conv2d condition_scale.4.2 1 1 179 180 bias=True dilation=(1,1) groups=1 in_channels=128 kernel_size=(3,3) out_channels=128 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(128)f32 @weight=(128,128,3,3)f32 #179=(1,128,128,128)f32 #180=(1,128,128,128)f32
pnnx.Expression pnnx_expr_2285 0 1 181 expr=None
aten::clone pnnx_262 2 1 180 181 182 #180=(1,128,128,128)f32 #182=(1,128,128,128)f32
nn.Conv2d condition_shift.4.0 1 1 177 183 bias=True dilation=(1,1) groups=1 in_channels=128 kernel_size=(3,3) out_channels=128 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(128)f32 @weight=(128,128,3,3)f32 #177=(1,128,128,128)f32 #183=(1,128,128,128)f32
nn.LeakyReLU condition_shift.4.1 1 1 183 184 negative_slope=2.000000e-01 #183=(1,128,128,128)f32 #184=(1,128,128,128)f32
nn.Conv2d condition_shift.4.2 1 1 184 185 bias=True dilation=(1,1) groups=1 in_channels=128 kernel_size=(3,3) out_channels=128 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(128)f32 @weight=(128,128,3,3)f32 #184=(1,128,128,128)f32 #185=(1,128,128,128)f32
pnnx.Expression pnnx_expr_2284 0 1 186 expr=None
aten::clone pnnx_264 2 1 185 186 187 #185=(1,128,128,128)f32 #187=(1,128,128,128)f32
nn.Conv2d toRGB.4 1 1 177 188 bias=True dilation=(1,1) groups=1 in_channels=128 kernel_size=(1,1) out_channels=3 padding=(0,0) padding_mode=zeros stride=(1,1) @bias=(3)f32 @weight=(3,128,1,1)f32 #177=(1,128,128,128)f32 #188=(1,3,128,128)f32
pnnx.Expression pnnx_expr_2282 2 1 177 24 189 expr=add(@0,@1) #177=(1,128,128,128)f32 #24=(1,128,128,128)f32 #189=(1,128,128,128)f32
pnnx.Expression pnnx_expr_2280 0 1 190 expr=2.000000e-01
nn.Conv2d conv_body_up.5.conv1 1 1 189 191 bias=True dilation=(1,1) groups=1 in_channels=128 kernel_size=(3,3) out_channels=128 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(128)f32 @weight=(128,128,3,3)f32 #189=(1,128,128,128)f32 #191=(1,128,128,128)f32
aten::leaky_relu_ pnnx_272 2 1 191 190 192 #191=(1,128,128,128)f32 #192=(1,128,128,128)f32
F.upsample F.upsample_62 1 1 192 193 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=192 #192=(1,128,128,128)f32 #193=(1,128,256,256)f32
nn.Conv2d conv_body_up.5.conv2 1 1 193 194 bias=True dilation=(1,1) groups=1 in_channels=128 kernel_size=(3,3) out_channels=64 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(64)f32 @weight=(64,128,3,3)f32 #193=(1,128,256,256)f32 #194=(1,64,256,256)f32
pnnx.Expression pnnx_expr_2276 0 1 195 expr=2.000000e-01
aten::leaky_relu_ pnnx_277 2 1 194 195 196 #194=(1,64,256,256)f32 #196=(1,64,256,256)f32
F.upsample F.upsample_63 1 1 189 197 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=189 #189=(1,128,128,128)f32 #197=(1,128,256,256)f32
nn.Conv2d conv_body_up.5.skip 1 1 197 198 bias=False dilation=(1,1) groups=1 in_channels=128 kernel_size=(1,1) out_channels=64 padding=(0,0) padding_mode=zeros stride=(1,1) @weight=(64,128,1,1)f32 #197=(1,128,256,256)f32 #198=(1,64,256,256)f32
pnnx.Expression pnnx_expr_2272 2 1 196 198 199 expr=add(@0,@1) #196=(1,64,256,256)f32 #198=(1,64,256,256)f32 #199=(1,64,256,256)f32
nn.Conv2d condition_scale.5.0 1 1 199 200 bias=True dilation=(1,1) groups=1 in_channels=64 kernel_size=(3,3) out_channels=64 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(64)f32 @weight=(64,64,3,3)f32 #199=(1,64,256,256)f32 #200=(1,64,256,256)f32
nn.LeakyReLU condition_scale.5.1 1 1 200 201 negative_slope=2.000000e-01 #200=(1,64,256,256)f32 #201=(1,64,256,256)f32
nn.Conv2d condition_scale.5.2 1 1 201 202 bias=True dilation=(1,1) groups=1 in_channels=64 kernel_size=(3,3) out_channels=64 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(64)f32 @weight=(64,64,3,3)f32 #201=(1,64,256,256)f32 #202=(1,64,256,256)f32
pnnx.Expression pnnx_expr_2271 0 1 203 expr=None
aten::clone pnnx_286 2 1 202 203 204 #202=(1,64,256,256)f32 #204=(1,64,256,256)f32
nn.Conv2d condition_shift.5.0 1 1 199 205 bias=True dilation=(1,1) groups=1 in_channels=64 kernel_size=(3,3) out_channels=64 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(64)f32 @weight=(64,64,3,3)f32 #199=(1,64,256,256)f32 #205=(1,64,256,256)f32
nn.LeakyReLU condition_shift.5.1 1 1 205 206 negative_slope=2.000000e-01 #205=(1,64,256,256)f32 #206=(1,64,256,256)f32
nn.Conv2d condition_shift.5.2 1 1 206 207 bias=True dilation=(1,1) groups=1 in_channels=64 kernel_size=(3,3) out_channels=64 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(64)f32 @weight=(64,64,3,3)f32 #206=(1,64,256,256)f32 #207=(1,64,256,256)f32
pnnx.Expression pnnx_expr_2270 0 1 208 expr=None
aten::clone pnnx_288 2 1 207 208 209 #207=(1,64,256,256)f32 #209=(1,64,256,256)f32
nn.Conv2d toRGB.5 1 1 199 210 bias=True dilation=(1,1) groups=1 in_channels=64 kernel_size=(1,1) out_channels=3 padding=(0,0) padding_mode=zeros stride=(1,1) @bias=(3)f32 @weight=(3,64,1,1)f32 #199=(1,64,256,256)f32 #210=(1,3,256,256)f32
pnnx.Expression pnnx_expr_2268 2 1 199 14 211 expr=add(@0,@1) #199=(1,64,256,256)f32 #14=(1,64,256,256)f32 #211=(1,64,256,256)f32
pnnx.Expression pnnx_expr_2266 0 1 212 expr=2.000000e-01
nn.Conv2d conv_body_up.6.conv1 1 1 211 213 bias=True dilation=(1,1) groups=1 in_channels=64 kernel_size=(3,3) out_channels=64 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(64)f32 @weight=(64,64,3,3)f32 #211=(1,64,256,256)f32 #213=(1,64,256,256)f32
aten::leaky_relu_ pnnx_296 2 1 213 212 214 #213=(1,64,256,256)f32 #214=(1,64,256,256)f32
F.upsample F.upsample_64 1 1 214 215 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=214 #214=(1,64,256,256)f32 #215=(1,64,512,512)f32
nn.Conv2d conv_body_up.6.conv2 1 1 215 216 bias=True dilation=(1,1) groups=1 in_channels=64 kernel_size=(3,3) out_channels=32 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(32)f32 @weight=(32,64,3,3)f32 #215=(1,64,512,512)f32 #216=(1,32,512,512)f32
pnnx.Expression pnnx_expr_2262 0 1 217 expr=2.000000e-01
aten::leaky_relu_ pnnx_301 2 1 216 217 218 #216=(1,32,512,512)f32 #218=(1,32,512,512)f32
F.upsample F.upsample_65 1 1 211 219 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=211 #211=(1,64,256,256)f32 #219=(1,64,512,512)f32
nn.Conv2d conv_body_up.6.skip 1 1 219 220 bias=False dilation=(1,1) groups=1 in_channels=64 kernel_size=(1,1) out_channels=32 padding=(0,0) padding_mode=zeros stride=(1,1) @weight=(32,64,1,1)f32 #219=(1,64,512,512)f32 #220=(1,32,512,512)f32
pnnx.Expression pnnx_expr_2258 2 1 218 220 221 expr=add(@0,@1) #218=(1,32,512,512)f32 #220=(1,32,512,512)f32 #221=(1,32,512,512)f32
nn.Conv2d condition_scale.6.0 1 1 221 222 bias=True dilation=(1,1) groups=1 in_channels=32 kernel_size=(3,3) out_channels=32 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(32)f32 @weight=(32,32,3,3)f32 #221=(1,32,512,512)f32 #222=(1,32,512,512)f32
nn.LeakyReLU condition_scale.6.1 1 1 222 223 negative_slope=2.000000e-01 #222=(1,32,512,512)f32 #223=(1,32,512,512)f32
nn.Conv2d condition_scale.6.2 1 1 223 224 bias=True dilation=(1,1) groups=1 in_channels=32 kernel_size=(3,3) out_channels=32 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(32)f32 @weight=(32,32,3,3)f32 #223=(1,32,512,512)f32 #224=(1,32,512,512)f32
pnnx.Expression pnnx_expr_2257 0 1 225 expr=None
aten::clone pnnx_310 2 1 224 225 226 #224=(1,32,512,512)f32 #226=(1,32,512,512)f32
nn.Conv2d condition_shift.6.0 1 1 221 227 bias=True dilation=(1,1) groups=1 in_channels=32 kernel_size=(3,3) out_channels=32 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(32)f32 @weight=(32,32,3,3)f32 #221=(1,32,512,512)f32 #227=(1,32,512,512)f32
nn.LeakyReLU condition_shift.6.1 1 1 227 228 negative_slope=2.000000e-01 #227=(1,32,512,512)f32 #228=(1,32,512,512)f32
nn.Conv2d condition_shift.6.2 1 1 228 229 bias=True dilation=(1,1) groups=1 in_channels=32 kernel_size=(3,3) out_channels=32 padding=(1,1) padding_mode=zeros stride=(1,1) @bias=(32)f32 @weight=(32,32,3,3)f32 #228=(1,32,512,512)f32 #229=(1,32,512,512)f32
pnnx.Expression pnnx_expr_2256 0 1 230 expr=None
aten::clone pnnx_312 2 1 229 230 231 #229=(1,32,512,512)f32 #231=(1,32,512,512)f32
nn.Conv2d toRGB.6 1 1 221 232 bias=True dilation=(1,1) groups=1 in_channels=32 kernel_size=(1,1) out_channels=3 padding=(0,0) padding_mode=zeros stride=(1,1) @bias=(3)f32 @weight=(3,32,1,1)f32 #221=(1,32,512,512)f32 #232=(1,3,512,512)f32
Tensor.view Tensor.view_157 1 1 79 233 shape=(1,-1,512) $input=79 #79=(1,8192)f32 #233=(1,16,512)f32
pnnx.Attribute stylegan_decoder.constant_input 0 1 234 @weight=(1,512,4,4)f32 #234=(1,512,4,4)f32
pnnx.Expression pnnx_expr_2223 0 1 235 expr=None
pnnx.Expression pnnx_expr_2222 0 1 236 expr=1.000000e+00
pnnx.Expression pnnx_expr_2221 0 1 237 expr=0.000000e+00
pnnx.Attribute stylegan_decoder.style_conv1 0 1 238 @bias=(1,512,1,1)f32 #238=(1,512,1,1)f32
pnnx.Attribute stylegan_decoder.style_conv1 0 1 239 @weight=(1)f32 #239=(1)f32
pnnx.Attribute stylegan_decoder.style_conv1.modulated_conv 0 1 240 @weight=(1,512,512,3,3)f32 #240=(1,512,512,3,3)f32
Tensor.repeat Tensor.repeat_80 1 1 234 241 sizes=(1,1,1,1) $input=234 #234=(1,512,4,4)f32 #241=(1,512,4,4)f32
Tensor.select Tensor.select_81 1 1 233 242 dim=1 index=0 $input=233 #233=(1,16,512)f32 #242=(1,512)f32
nn.Linear stylegan_decoder.style_conv1.modulated_conv.modulation 1 1 242 243 bias=True in_features=512 out_features=512 @bias=(512)f32 @weight=(512,512)f32 #242=(1,512)f32 #243=(1,512)f32
Tensor.view Tensor.view_158 1 1 243 244 shape=(1,1,512,1,1) $input=243 #243=(1,512)f32 #244=(1,1,512,1,1)f32
pnnx.Expression pnnx_expr_2195 2 1 240 244 245 expr=mul(@0,@1) #240=(1,512,512,3,3)f32 #244=(1,1,512,1,1)f32 #245=(1,512,512,3,3)f32
pnnx.Expression pnnx_expr_2194 1 1 245 246 expr=pow(@0,2) #245=(1,512,512,3,3)f32 #246=(1,512,512,3,3)f32
torch.sum torch.sum_111 1 1 246 247 dim=(2,3,4) keepdim=False $input=246 #246=(1,512,512,3,3)f32 #247=(1,512)f32
pnnx.Expression pnnx_expr_2189 1 1 247 248 expr=rsqrt(add(@0,1.000000e-08)) #247=(1,512)f32 #248=(1,512)f32
Tensor.view Tensor.view_159 1 1 248 249 shape=(1,512,1,1,1) $input=248 #248=(1,512)f32 #249=(1,512,1,1,1)f32
pnnx.Expression pnnx_expr_2184 2 1 245 249 250 expr=mul(@0,@1) #245=(1,512,512,3,3)f32 #249=(1,512,1,1,1)f32 #250=(1,512,512,3,3)f32
Tensor.view Tensor.view_160 1 1 250 251 shape=(512,512,3,3) $input=250 #250=(1,512,512,3,3)f32 #251=(512,512,3,3)f32
F.conv2d F.conv2d_15 2 1 241 251 252 bias=None dilation=(1,1) groups=1 padding=(1,1) stride=(1,1) $input=241 $weight=251 #241=(1,512,4,4)f32 #251=(512,512,3,3)f32 #252=(1,512,4,4)f32
pnnx.Expression pnnx_expr_2138 1 1 252 253 expr=mul(@0,1.414214e+00) #252=(1,512,4,4)f32 #253=(1,512,4,4)f32
Tensor.new_empty Tensor.new_empty_0 1 1 253 254 size=(1,1,4,4) $input=253 #253=(1,512,4,4)f32 #254=(1,1,4,4)f32
aten::normal_ pnnx_466 4 1 254 237 236 235 255 #254=(1,1,4,4)f32 #255=(1,1,4,4)f32
pnnx.Expression pnnx_expr_2123 4 1 253 239 255 238 256 expr=add(add(@0,mul(@1,@2)),@3) #253=(1,512,4,4)f32 #239=(1)f32 #255=(1,1,4,4)f32 #238=(1,512,1,1)f32 #256=(1,512,4,4)f32
nn.LeakyReLU stylegan_decoder.style_conv1.activate 1 1 256 257 negative_slope=2.000000e-01 #256=(1,512,4,4)f32 #257=(1,512,4,4)f32
pnnx.Attribute stylegan_decoder.to_rgb1 0 1 258 @bias=(1,3,1,1)f32 #258=(1,3,1,1)f32
pnnx.Attribute stylegan_decoder.to_rgb1.modulated_conv 0 1 259 @weight=(1,3,512,1,1)f32 #259=(1,3,512,1,1)f32
Tensor.select Tensor.select_82 1 1 233 260 dim=1 index=1 $input=233 #233=(1,16,512)f32 #260=(1,512)f32
nn.Linear stylegan_decoder.to_rgb1.modulated_conv.modulation 1 1 260 261 bias=True in_features=512 out_features=512 @bias=(512)f32 @weight=(512,512)f32 #260=(1,512)f32 #261=(1,512)f32
Tensor.view Tensor.view_163 1 1 261 262 shape=(1,1,512,1,1) $input=261 #261=(1,512)f32 #262=(1,1,512,1,1)f32
pnnx.Expression pnnx_expr_2098 2 1 259 262 263 expr=mul(@0,@1) #259=(1,3,512,1,1)f32 #262=(1,1,512,1,1)f32 #263=(1,3,512,1,1)f32
Tensor.view Tensor.view_164 1 1 263 264 shape=(3,512,1,1) $input=263 #263=(1,3,512,1,1)f32 #264=(3,512,1,1)f32
F.conv2d F.conv2d_16 2 1 257 264 265 bias=None dilation=(1,1) groups=1 padding=(0,0) stride=(1,1) $input=257 $weight=264 #257=(1,512,4,4)f32 #264=(3,512,1,1)f32 #265=(1,3,4,4)f32
pnnx.Expression pnnx_expr_2055 2 1 265 258 266 expr=add(@0,@1) #265=(1,3,4,4)f32 #258=(1,3,1,1)f32 #266=(1,3,4,4)f32
pnnx.Expression pnnx_expr_2048 0 1 267 expr=None
pnnx.Expression pnnx_expr_2047 0 1 268 expr=1.000000e+00
pnnx.Expression pnnx_expr_2046 0 1 269 expr=0.000000e+00
pnnx.Attribute stylegan_decoder.style_convs.0 0 1 270 @bias=(1,512,1,1)f32 #270=(1,512,1,1)f32
pnnx.Attribute stylegan_decoder.style_convs.0 0 1 271 @weight=(1)f32 #271=(1)f32
pnnx.Attribute stylegan_decoder.style_convs.0.modulated_conv 0 1 272 @weight=(1,512,512,3,3)f32 #272=(1,512,512,3,3)f32
Tensor.select Tensor.select_83 1 1 233 273 dim=1 index=1 $input=233 #233=(1,16,512)f32 #273=(1,512)f32
nn.Linear stylegan_decoder.style_convs.0.modulated_conv.modulation 1 1 273 274 bias=True in_features=512 out_features=512 @bias=(512)f32 @weight=(512,512)f32 #273=(1,512)f32 #274=(1,512)f32
Tensor.view Tensor.view_167 1 1 274 275 shape=(1,1,512,1,1) $input=274 #274=(1,512)f32 #275=(1,1,512,1,1)f32
pnnx.Expression pnnx_expr_2019 2 1 272 275 276 expr=mul(@0,@1) #272=(1,512,512,3,3)f32 #275=(1,1,512,1,1)f32 #276=(1,512,512,3,3)f32
pnnx.Expression pnnx_expr_2018 1 1 276 277 expr=pow(@0,2) #276=(1,512,512,3,3)f32 #277=(1,512,512,3,3)f32
torch.sum torch.sum_112 1 1 277 278 dim=(2,3,4) keepdim=False $input=277 #277=(1,512,512,3,3)f32 #278=(1,512)f32
pnnx.Expression pnnx_expr_2013 1 1 278 279 expr=rsqrt(add(@0,1.000000e-08)) #278=(1,512)f32 #279=(1,512)f32
Tensor.view Tensor.view_168 1 1 279 280 shape=(1,512,1,1,1) $input=279 #279=(1,512)f32 #280=(1,512,1,1,1)f32
pnnx.Expression pnnx_expr_2008 2 1 276 280 281 expr=mul(@0,@1) #276=(1,512,512,3,3)f32 #280=(1,512,1,1,1)f32 #281=(1,512,512,3,3)f32
F.upsample F.upsample_66 1 1 257 282 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=257 #257=(1,512,4,4)f32 #282=(1,512,8,8)f32
Tensor.view Tensor.view_169 1 1 281 283 shape=(512,512,3,3) $input=281 #281=(1,512,512,3,3)f32 #283=(512,512,3,3)f32
F.conv2d F.conv2d_17 2 1 282 283 284 bias=None dilation=(1,1) groups=1 padding=(1,1) stride=(1,1) $input=282 $weight=283 #282=(1,512,8,8)f32 #283=(512,512,3,3)f32 #284=(1,512,8,8)f32
pnnx.Expression pnnx_expr_1960 1 1 284 285 expr=mul(@0,1.414214e+00) #284=(1,512,8,8)f32 #285=(1,512,8,8)f32
Tensor.new_empty Tensor.new_empty_1 1 1 285 286 size=(1,1,8,8) $input=285 #285=(1,512,8,8)f32 #286=(1,1,8,8)f32
aten::normal_ pnnx_687 4 1 286 269 268 267 287 #286=(1,1,8,8)f32 #287=(1,1,8,8)f32
pnnx.Expression pnnx_expr_1945 4 1 285 271 287 270 288 expr=add(add(@0,mul(@1,@2)),@3) #285=(1,512,8,8)f32 #271=(1)f32 #287=(1,1,8,8)f32 #270=(1,512,1,1)f32 #288=(1,512,8,8)f32
nn.LeakyReLU stylegan_decoder.style_convs.0.activate 1 1 288 289 negative_slope=2.000000e-01 #288=(1,512,8,8)f32 #289=(1,512,8,8)f32
torch.split torch.split_126 1 2 289 290 291 dim=1 split_size_or_sections=256 $tensor=289 #289=(1,512,8,8)f32 #290=(1,256,8,8)f32 #291=(1,256,8,8)f32
pnnx.Expression pnnx_expr_1941 3 1 291 94 99 292 expr=add(mul(@0,@1),@2) #291=(1,256,8,8)f32 #94=(1,256,8,8)f32 #99=(1,256,8,8)f32 #292=(1,256,8,8)f32
pnnx.Expression pnnx_expr_1934 0 1 293 expr=None
pnnx.Expression pnnx_expr_1933 0 1 294 expr=1.000000e+00
pnnx.Expression pnnx_expr_1932 0 1 295 expr=0.000000e+00
pnnx.Attribute stylegan_decoder.style_convs.1 0 1 296 @bias=(1,512,1,1)f32 #296=(1,512,1,1)f32
pnnx.Attribute stylegan_decoder.style_convs.1 0 1 297 @weight=(1)f32 #297=(1)f32
pnnx.Attribute stylegan_decoder.style_convs.1.modulated_conv 0 1 298 @weight=(1,512,512,3,3)f32 #298=(1,512,512,3,3)f32
torch.cat torch.cat_104 2 1 290 292 299 dim=1 #290=(1,256,8,8)f32 #292=(1,256,8,8)f32 #299=(1,512,8,8)f32
Tensor.select Tensor.select_84 1 1 233 300 dim=1 index=2 $input=233 #233=(1,16,512)f32 #300=(1,512)f32
nn.Linear stylegan_decoder.style_convs.1.modulated_conv.modulation 1 1 300 301 bias=True in_features=512 out_features=512 @bias=(512)f32 @weight=(512,512)f32 #300=(1,512)f32 #301=(1,512)f32
Tensor.view Tensor.view_172 1 1 301 302 shape=(1,1,512,1,1) $input=301 #301=(1,512)f32 #302=(1,1,512,1,1)f32
pnnx.Expression pnnx_expr_1906 2 1 298 302 303 expr=mul(@0,@1) #298=(1,512,512,3,3)f32 #302=(1,1,512,1,1)f32 #303=(1,512,512,3,3)f32
pnnx.Expression pnnx_expr_1905 1 1 303 304 expr=pow(@0,2) #303=(1,512,512,3,3)f32 #304=(1,512,512,3,3)f32
torch.sum torch.sum_113 1 1 304 305 dim=(2,3,4) keepdim=False $input=304 #304=(1,512,512,3,3)f32 #305=(1,512)f32
pnnx.Expression pnnx_expr_1900 1 1 305 306 expr=rsqrt(add(@0,1.000000e-08)) #305=(1,512)f32 #306=(1,512)f32
Tensor.view Tensor.view_173 1 1 306 307 shape=(1,512,1,1,1) $input=306 #306=(1,512)f32 #307=(1,512,1,1,1)f32
pnnx.Expression pnnx_expr_1895 2 1 303 307 308 expr=mul(@0,@1) #303=(1,512,512,3,3)f32 #307=(1,512,1,1,1)f32 #308=(1,512,512,3,3)f32
Tensor.view Tensor.view_174 1 1 308 309 shape=(512,512,3,3) $input=308 #308=(1,512,512,3,3)f32 #309=(512,512,3,3)f32
F.conv2d F.conv2d_18 2 1 299 309 310 bias=None dilation=(1,1) groups=1 padding=(1,1) stride=(1,1) $input=299 $weight=309 #299=(1,512,8,8)f32 #309=(512,512,3,3)f32 #310=(1,512,8,8)f32
pnnx.Expression pnnx_expr_1849 1 1 310 311 expr=mul(@0,1.414214e+00) #310=(1,512,8,8)f32 #311=(1,512,8,8)f32
Tensor.new_empty Tensor.new_empty_2 1 1 311 312 size=(1,1,8,8) $input=311 #311=(1,512,8,8)f32 #312=(1,1,8,8)f32
aten::normal_ pnnx_827 4 1 312 295 294 293 313 #312=(1,1,8,8)f32 #313=(1,1,8,8)f32
pnnx.Expression pnnx_expr_1834 4 1 311 297 313 296 314 expr=add(add(@0,mul(@1,@2)),@3) #311=(1,512,8,8)f32 #297=(1)f32 #313=(1,1,8,8)f32 #296=(1,512,1,1)f32 #314=(1,512,8,8)f32
nn.LeakyReLU stylegan_decoder.style_convs.1.activate 1 1 314 315 negative_slope=2.000000e-01 #314=(1,512,8,8)f32 #315=(1,512,8,8)f32
pnnx.Attribute stylegan_decoder.to_rgbs.0 0 1 316 @bias=(1,3,1,1)f32 #316=(1,3,1,1)f32
pnnx.Attribute stylegan_decoder.to_rgbs.0.modulated_conv 0 1 317 @weight=(1,3,512,1,1)f32 #317=(1,3,512,1,1)f32
Tensor.select Tensor.select_85 1 1 233 318 dim=1 index=3 $input=233 #233=(1,16,512)f32 #318=(1,512)f32
nn.Linear stylegan_decoder.to_rgbs.0.modulated_conv.modulation 1 1 318 319 bias=True in_features=512 out_features=512 @bias=(512)f32 @weight=(512,512)f32 #318=(1,512)f32 #319=(1,512)f32
Tensor.view Tensor.view_177 1 1 319 320 shape=(1,1,512,1,1) $input=319 #319=(1,512)f32 #320=(1,1,512,1,1)f32
pnnx.Expression pnnx_expr_1809 2 1 317 320 321 expr=mul(@0,@1) #317=(1,3,512,1,1)f32 #320=(1,1,512,1,1)f32 #321=(1,3,512,1,1)f32
Tensor.view Tensor.view_178 1 1 321 322 shape=(3,512,1,1) $input=321 #321=(1,3,512,1,1)f32 #322=(3,512,1,1)f32
F.conv2d F.conv2d_19 2 1 315 322 323 bias=None dilation=(1,1) groups=1 padding=(0,0) stride=(1,1) $input=315 $weight=322 #315=(1,512,8,8)f32 #322=(3,512,1,1)f32 #323=(1,3,8,8)f32
F.upsample F.upsample_67 1 1 266 324 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=266 #266=(1,3,4,4)f32 #324=(1,3,8,8)f32
pnnx.Expression pnnx_expr_1762 3 1 323 316 324 325 expr=add(add(@0,@1),@2) #323=(1,3,8,8)f32 #316=(1,3,1,1)f32 #324=(1,3,8,8)f32 #325=(1,3,8,8)f32
pnnx.Expression pnnx_expr_1755 0 1 326 expr=None
pnnx.Expression pnnx_expr_1754 0 1 327 expr=1.000000e+00
pnnx.Expression pnnx_expr_1753 0 1 328 expr=0.000000e+00
pnnx.Attribute stylegan_decoder.style_convs.2 0 1 329 @bias=(1,512,1,1)f32 #329=(1,512,1,1)f32
pnnx.Attribute stylegan_decoder.style_convs.2 0 1 330 @weight=(1)f32 #330=(1)f32
pnnx.Attribute stylegan_decoder.style_convs.2.modulated_conv 0 1 331 @weight=(1,512,512,3,3)f32 #331=(1,512,512,3,3)f32
Tensor.select Tensor.select_86 1 1 233 332 dim=1 index=3 $input=233 #233=(1,16,512)f32 #332=(1,512)f32
nn.Linear stylegan_decoder.style_convs.2.modulated_conv.modulation 1 1 332 333 bias=True in_features=512 out_features=512 @bias=(512)f32 @weight=(512,512)f32 #332=(1,512)f32 #333=(1,512)f32
Tensor.view Tensor.view_181 1 1 333 334 shape=(1,1,512,1,1) $input=333 #333=(1,512)f32 #334=(1,1,512,1,1)f32
pnnx.Expression pnnx_expr_1726 2 1 331 334 335 expr=mul(@0,@1) #331=(1,512,512,3,3)f32 #334=(1,1,512,1,1)f32 #335=(1,512,512,3,3)f32
pnnx.Expression pnnx_expr_1725 1 1 335 336 expr=pow(@0,2) #335=(1,512,512,3,3)f32 #336=(1,512,512,3,3)f32
torch.sum torch.sum_114 1 1 336 337 dim=(2,3,4) keepdim=False $input=336 #336=(1,512,512,3,3)f32 #337=(1,512)f32
pnnx.Expression pnnx_expr_1720 1 1 337 338 expr=rsqrt(add(@0,1.000000e-08)) #337=(1,512)f32 #338=(1,512)f32
Tensor.view Tensor.view_182 1 1 338 339 shape=(1,512,1,1,1) $input=338 #338=(1,512)f32 #339=(1,512,1,1,1)f32
pnnx.Expression pnnx_expr_1715 2 1 335 339 340 expr=mul(@0,@1) #335=(1,512,512,3,3)f32 #339=(1,512,1,1,1)f32 #340=(1,512,512,3,3)f32
F.upsample F.upsample_68 1 1 315 341 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=315 #315=(1,512,8,8)f32 #341=(1,512,16,16)f32
Tensor.view Tensor.view_183 1 1 340 342 shape=(512,512,3,3) $input=340 #340=(1,512,512,3,3)f32 #342=(512,512,3,3)f32
F.conv2d F.conv2d_20 2 1 341 342 343 bias=None dilation=(1,1) groups=1 padding=(1,1) stride=(1,1) $input=341 $weight=342 #341=(1,512,16,16)f32 #342=(512,512,3,3)f32 #343=(1,512,16,16)f32
pnnx.Expression pnnx_expr_1667 1 1 343 344 expr=mul(@0,1.414214e+00) #343=(1,512,16,16)f32 #344=(1,512,16,16)f32
Tensor.new_empty Tensor.new_empty_3 1 1 344 345 size=(1,1,16,16) $input=344 #344=(1,512,16,16)f32 #345=(1,1,16,16)f32
aten::normal_ pnnx_1055 4 1 345 328 327 326 346 #345=(1,1,16,16)f32 #346=(1,1,16,16)f32
pnnx.Expression pnnx_expr_1652 4 1 344 330 346 329 347 expr=add(add(@0,mul(@1,@2)),@3) #344=(1,512,16,16)f32 #330=(1)f32 #346=(1,1,16,16)f32 #329=(1,512,1,1)f32 #347=(1,512,16,16)f32
nn.LeakyReLU stylegan_decoder.style_convs.2.activate 1 1 347 348 negative_slope=2.000000e-01 #347=(1,512,16,16)f32 #348=(1,512,16,16)f32
torch.split torch.split_127 1 2 348 349 350 dim=1 split_size_or_sections=256 $tensor=348 #348=(1,512,16,16)f32 #349=(1,256,16,16)f32 #350=(1,256,16,16)f32
pnnx.Expression pnnx_expr_1647 3 1 350 116 121 351 expr=add(mul(@0,@1),@2) #350=(1,256,16,16)f32 #116=(1,256,16,16)f32 #121=(1,256,16,16)f32 #351=(1,256,16,16)f32
pnnx.Expression pnnx_expr_1640 0 1 352 expr=None
pnnx.Expression pnnx_expr_1639 0 1 353 expr=1.000000e+00
pnnx.Expression pnnx_expr_1638 0 1 354 expr=0.000000e+00
pnnx.Attribute stylegan_decoder.style_convs.3 0 1 355 @bias=(1,512,1,1)f32 #355=(1,512,1,1)f32
pnnx.Attribute stylegan_decoder.style_convs.3 0 1 356 @weight=(1)f32 #356=(1)f32
pnnx.Attribute stylegan_decoder.style_convs.3.modulated_conv 0 1 357 @weight=(1,512,512,3,3)f32 #357=(1,512,512,3,3)f32
torch.cat torch.cat_105 2 1 349 351 358 dim=1 #349=(1,256,16,16)f32 #351=(1,256,16,16)f32 #358=(1,512,16,16)f32
Tensor.select Tensor.select_87 1 1 233 359 dim=1 index=4 $input=233 #233=(1,16,512)f32 #359=(1,512)f32
nn.Linear stylegan_decoder.style_convs.3.modulated_conv.modulation 1 1 359 360 bias=True in_features=512 out_features=512 @bias=(512)f32 @weight=(512,512)f32 #359=(1,512)f32 #360=(1,512)f32
Tensor.view Tensor.view_186 1 1 360 361 shape=(1,1,512,1,1) $input=360 #360=(1,512)f32 #361=(1,1,512,1,1)f32
pnnx.Expression pnnx_expr_1612 2 1 357 361 362 expr=mul(@0,@1) #357=(1,512,512,3,3)f32 #361=(1,1,512,1,1)f32 #362=(1,512,512,3,3)f32
pnnx.Expression pnnx_expr_1611 1 1 362 363 expr=pow(@0,2) #362=(1,512,512,3,3)f32 #363=(1,512,512,3,3)f32
torch.sum torch.sum_115 1 1 363 364 dim=(2,3,4) keepdim=False $input=363 #363=(1,512,512,3,3)f32 #364=(1,512)f32
pnnx.Expression pnnx_expr_1606 1 1 364 365 expr=rsqrt(add(@0,1.000000e-08)) #364=(1,512)f32 #365=(1,512)f32
Tensor.view Tensor.view_187 1 1 365 366 shape=(1,512,1,1,1) $input=365 #365=(1,512)f32 #366=(1,512,1,1,1)f32
pnnx.Expression pnnx_expr_1601 2 1 362 366 367 expr=mul(@0,@1) #362=(1,512,512,3,3)f32 #366=(1,512,1,1,1)f32 #367=(1,512,512,3,3)f32
Tensor.view Tensor.view_188 1 1 367 368 shape=(512,512,3,3) $input=367 #367=(1,512,512,3,3)f32 #368=(512,512,3,3)f32
F.conv2d F.conv2d_21 2 1 358 368 369 bias=None dilation=(1,1) groups=1 padding=(1,1) stride=(1,1) $input=358 $weight=368 #358=(1,512,16,16)f32 #368=(512,512,3,3)f32 #369=(1,512,16,16)f32
pnnx.Expression pnnx_expr_1555 1 1 369 370 expr=mul(@0,1.414214e+00) #369=(1,512,16,16)f32 #370=(1,512,16,16)f32
Tensor.new_empty Tensor.new_empty_4 1 1 370 371 size=(1,1,16,16) $input=370 #370=(1,512,16,16)f32 #371=(1,1,16,16)f32
aten::normal_ pnnx_1196 4 1 371 354 353 352 372 #371=(1,1,16,16)f32 #372=(1,1,16,16)f32
pnnx.Expression pnnx_expr_1540 4 1 370 356 372 355 373 expr=add(add(@0,mul(@1,@2)),@3) #370=(1,512,16,16)f32 #356=(1)f32 #372=(1,1,16,16)f32 #355=(1,512,1,1)f32 #373=(1,512,16,16)f32
nn.LeakyReLU stylegan_decoder.style_convs.3.activate 1 1 373 374 negative_slope=2.000000e-01 #373=(1,512,16,16)f32 #374=(1,512,16,16)f32
pnnx.Attribute stylegan_decoder.to_rgbs.1 0 1 375 @bias=(1,3,1,1)f32 #375=(1,3,1,1)f32
pnnx.Attribute stylegan_decoder.to_rgbs.1.modulated_conv 0 1 376 @weight=(1,3,512,1,1)f32 #376=(1,3,512,1,1)f32
Tensor.select Tensor.select_88 1 1 233 377 dim=1 index=5 $input=233 #233=(1,16,512)f32 #377=(1,512)f32
nn.Linear stylegan_decoder.to_rgbs.1.modulated_conv.modulation 1 1 377 378 bias=True in_features=512 out_features=512 @bias=(512)f32 @weight=(512,512)f32 #377=(1,512)f32 #378=(1,512)f32
Tensor.view Tensor.view_191 1 1 378 379 shape=(1,1,512,1,1) $input=378 #378=(1,512)f32 #379=(1,1,512,1,1)f32
pnnx.Expression pnnx_expr_1515 2 1 376 379 380 expr=mul(@0,@1) #376=(1,3,512,1,1)f32 #379=(1,1,512,1,1)f32 #380=(1,3,512,1,1)f32
Tensor.view Tensor.view_192 1 1 380 381 shape=(3,512,1,1) $input=380 #380=(1,3,512,1,1)f32 #381=(3,512,1,1)f32
F.conv2d F.conv2d_22 2 1 374 381 382 bias=None dilation=(1,1) groups=1 padding=(0,0) stride=(1,1) $input=374 $weight=381 #374=(1,512,16,16)f32 #381=(3,512,1,1)f32 #382=(1,3,16,16)f32
F.upsample F.upsample_69 1 1 325 383 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=325 #325=(1,3,8,8)f32 #383=(1,3,16,16)f32
pnnx.Expression pnnx_expr_1468 3 1 382 375 383 384 expr=add(add(@0,@1),@2) #382=(1,3,16,16)f32 #375=(1,3,1,1)f32 #383=(1,3,16,16)f32 #384=(1,3,16,16)f32
pnnx.Expression pnnx_expr_1461 0 1 385 expr=None
pnnx.Expression pnnx_expr_1460 0 1 386 expr=1.000000e+00
pnnx.Expression pnnx_expr_1459 0 1 387 expr=0.000000e+00
pnnx.Attribute stylegan_decoder.style_convs.4 0 1 388 @bias=(1,512,1,1)f32 #388=(1,512,1,1)f32
pnnx.Attribute stylegan_decoder.style_convs.4 0 1 389 @weight=(1)f32 #389=(1)f32
pnnx.Attribute stylegan_decoder.style_convs.4.modulated_conv 0 1 390 @weight=(1,512,512,3,3)f32 #390=(1,512,512,3,3)f32
Tensor.select Tensor.select_89 1 1 233 391 dim=1 index=5 $input=233 #233=(1,16,512)f32 #391=(1,512)f32
nn.Linear stylegan_decoder.style_convs.4.modulated_conv.modulation 1 1 391 392 bias=True in_features=512 out_features=512 @bias=(512)f32 @weight=(512,512)f32 #391=(1,512)f32 #392=(1,512)f32
Tensor.view Tensor.view_195 1 1 392 393 shape=(1,1,512,1,1) $input=392 #392=(1,512)f32 #393=(1,1,512,1,1)f32
pnnx.Expression pnnx_expr_1432 2 1 390 393 394 expr=mul(@0,@1) #390=(1,512,512,3,3)f32 #393=(1,1,512,1,1)f32 #394=(1,512,512,3,3)f32
pnnx.Expression pnnx_expr_1431 1 1 394 395 expr=pow(@0,2) #394=(1,512,512,3,3)f32 #395=(1,512,512,3,3)f32
torch.sum torch.sum_116 1 1 395 396 dim=(2,3,4) keepdim=False $input=395 #395=(1,512,512,3,3)f32 #396=(1,512)f32
pnnx.Expression pnnx_expr_1426 1 1 396 397 expr=rsqrt(add(@0,1.000000e-08)) #396=(1,512)f32 #397=(1,512)f32
Tensor.view Tensor.view_196 1 1 397 398 shape=(1,512,1,1,1) $input=397 #397=(1,512)f32 #398=(1,512,1,1,1)f32
pnnx.Expression pnnx_expr_1421 2 1 394 398 399 expr=mul(@0,@1) #394=(1,512,512,3,3)f32 #398=(1,512,1,1,1)f32 #399=(1,512,512,3,3)f32
F.upsample F.upsample_70 1 1 374 400 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=374 #374=(1,512,16,16)f32 #400=(1,512,32,32)f32
Tensor.view Tensor.view_197 1 1 399 401 shape=(512,512,3,3) $input=399 #399=(1,512,512,3,3)f32 #401=(512,512,3,3)f32
F.conv2d F.conv2d_23 2 1 400 401 402 bias=None dilation=(1,1) groups=1 padding=(1,1) stride=(1,1) $input=400 $weight=401 #400=(1,512,32,32)f32 #401=(512,512,3,3)f32 #402=(1,512,32,32)f32
pnnx.Expression pnnx_expr_1373 1 1 402 403 expr=mul(@0,1.414214e+00) #402=(1,512,32,32)f32 #403=(1,512,32,32)f32
Tensor.new_empty Tensor.new_empty_5 1 1 403 404 size=(1,1,32,32) $input=403 #403=(1,512,32,32)f32 #404=(1,1,32,32)f32
aten::normal_ pnnx_1424 4 1 404 387 386 385 405 #404=(1,1,32,32)f32 #405=(1,1,32,32)f32
pnnx.Expression pnnx_expr_1358 4 1 403 389 405 388 406 expr=add(add(@0,mul(@1,@2)),@3) #403=(1,512,32,32)f32 #389=(1)f32 #405=(1,1,32,32)f32 #388=(1,512,1,1)f32 #406=(1,512,32,32)f32
nn.LeakyReLU stylegan_decoder.style_convs.4.activate 1 1 406 407 negative_slope=2.000000e-01 #406=(1,512,32,32)f32 #407=(1,512,32,32)f32
torch.split torch.split_128 1 2 407 408 409 dim=1 split_size_or_sections=256 $tensor=407 #407=(1,512,32,32)f32 #408=(1,256,32,32)f32 #409=(1,256,32,32)f32
pnnx.Expression pnnx_expr_1353 3 1 409 138 143 410 expr=add(mul(@0,@1),@2) #409=(1,256,32,32)f32 #138=(1,256,32,32)f32 #143=(1,256,32,32)f32 #410=(1,256,32,32)f32
pnnx.Expression pnnx_expr_1346 0 1 411 expr=None
pnnx.Expression pnnx_expr_1345 0 1 412 expr=1.000000e+00
pnnx.Expression pnnx_expr_1344 0 1 413 expr=0.000000e+00
pnnx.Attribute stylegan_decoder.style_convs.5 0 1 414 @bias=(1,512,1,1)f32 #414=(1,512,1,1)f32
pnnx.Attribute stylegan_decoder.style_convs.5 0 1 415 @weight=(1)f32 #415=(1)f32
pnnx.Attribute stylegan_decoder.style_convs.5.modulated_conv 0 1 416 @weight=(1,512,512,3,3)f32 #416=(1,512,512,3,3)f32
torch.cat torch.cat_106 2 1 408 410 417 dim=1 #408=(1,256,32,32)f32 #410=(1,256,32,32)f32 #417=(1,512,32,32)f32
Tensor.select Tensor.select_90 1 1 233 418 dim=1 index=6 $input=233 #233=(1,16,512)f32 #418=(1,512)f32
nn.Linear stylegan_decoder.style_convs.5.modulated_conv.modulation 1 1 418 419 bias=True in_features=512 out_features=512 @bias=(512)f32 @weight=(512,512)f32 #418=(1,512)f32 #419=(1,512)f32
Tensor.view Tensor.view_200 1 1 419 420 shape=(1,1,512,1,1) $input=419 #419=(1,512)f32 #420=(1,1,512,1,1)f32
pnnx.Expression pnnx_expr_1318 2 1 416 420 421 expr=mul(@0,@1) #416=(1,512,512,3,3)f32 #420=(1,1,512,1,1)f32 #421=(1,512,512,3,3)f32
pnnx.Expression pnnx_expr_1317 1 1 421 422 expr=pow(@0,2) #421=(1,512,512,3,3)f32 #422=(1,512,512,3,3)f32
torch.sum torch.sum_117 1 1 422 423 dim=(2,3,4) keepdim=False $input=422 #422=(1,512,512,3,3)f32 #423=(1,512)f32
pnnx.Expression pnnx_expr_1312 1 1 423 424 expr=rsqrt(add(@0,1.000000e-08)) #423=(1,512)f32 #424=(1,512)f32
Tensor.view Tensor.view_201 1 1 424 425 shape=(1,512,1,1,1) $input=424 #424=(1,512)f32 #425=(1,512,1,1,1)f32
pnnx.Expression pnnx_expr_1307 2 1 421 425 426 expr=mul(@0,@1) #421=(1,512,512,3,3)f32 #425=(1,512,1,1,1)f32 #426=(1,512,512,3,3)f32
Tensor.view Tensor.view_202 1 1 426 427 shape=(512,512,3,3) $input=426 #426=(1,512,512,3,3)f32 #427=(512,512,3,3)f32
F.conv2d F.conv2d_24 2 1 417 427 428 bias=None dilation=(1,1) groups=1 padding=(1,1) stride=(1,1) $input=417 $weight=427 #417=(1,512,32,32)f32 #427=(512,512,3,3)f32 #428=(1,512,32,32)f32
pnnx.Expression pnnx_expr_1261 1 1 428 429 expr=mul(@0,1.414214e+00) #428=(1,512,32,32)f32 #429=(1,512,32,32)f32
Tensor.new_empty Tensor.new_empty_6 1 1 429 430 size=(1,1,32,32) $input=429 #429=(1,512,32,32)f32 #430=(1,1,32,32)f32
aten::normal_ pnnx_1565 4 1 430 413 412 411 431 #430=(1,1,32,32)f32 #431=(1,1,32,32)f32
pnnx.Expression pnnx_expr_1246 4 1 429 415 431 414 432 expr=add(add(@0,mul(@1,@2)),@3) #429=(1,512,32,32)f32 #415=(1)f32 #431=(1,1,32,32)f32 #414=(1,512,1,1)f32 #432=(1,512,32,32)f32
nn.LeakyReLU stylegan_decoder.style_convs.5.activate 1 1 432 433 negative_slope=2.000000e-01 #432=(1,512,32,32)f32 #433=(1,512,32,32)f32
pnnx.Attribute stylegan_decoder.to_rgbs.2 0 1 434 @bias=(1,3,1,1)f32 #434=(1,3,1,1)f32
pnnx.Attribute stylegan_decoder.to_rgbs.2.modulated_conv 0 1 435 @weight=(1,3,512,1,1)f32 #435=(1,3,512,1,1)f32
Tensor.select Tensor.select_91 1 1 233 436 dim=1 index=7 $input=233 #233=(1,16,512)f32 #436=(1,512)f32
nn.Linear stylegan_decoder.to_rgbs.2.modulated_conv.modulation 1 1 436 437 bias=True in_features=512 out_features=512 @bias=(512)f32 @weight=(512,512)f32 #436=(1,512)f32 #437=(1,512)f32
Tensor.view Tensor.view_205 1 1 437 438 shape=(1,1,512,1,1) $input=437 #437=(1,512)f32 #438=(1,1,512,1,1)f32
pnnx.Expression pnnx_expr_1221 2 1 435 438 439 expr=mul(@0,@1) #435=(1,3,512,1,1)f32 #438=(1,1,512,1,1)f32 #439=(1,3,512,1,1)f32
Tensor.view Tensor.view_206 1 1 439 440 shape=(3,512,1,1) $input=439 #439=(1,3,512,1,1)f32 #440=(3,512,1,1)f32
F.conv2d F.conv2d_25 2 1 433 440 441 bias=None dilation=(1,1) groups=1 padding=(0,0) stride=(1,1) $input=433 $weight=440 #433=(1,512,32,32)f32 #440=(3,512,1,1)f32 #441=(1,3,32,32)f32
F.upsample F.upsample_71 1 1 384 442 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=384 #384=(1,3,16,16)f32 #442=(1,3,32,32)f32
pnnx.Expression pnnx_expr_1174 3 1 441 434 442 443 expr=add(add(@0,@1),@2) #441=(1,3,32,32)f32 #434=(1,3,1,1)f32 #442=(1,3,32,32)f32 #443=(1,3,32,32)f32
pnnx.Expression pnnx_expr_1167 0 1 444 expr=None
pnnx.Expression pnnx_expr_1166 0 1 445 expr=1.000000e+00
pnnx.Expression pnnx_expr_1165 0 1 446 expr=0.000000e+00
pnnx.Attribute stylegan_decoder.style_convs.6 0 1 447 @bias=(1,512,1,1)f32 #447=(1,512,1,1)f32
pnnx.Attribute stylegan_decoder.style_convs.6 0 1 448 @weight=(1)f32 #448=(1)f32
pnnx.Attribute stylegan_decoder.style_convs.6.modulated_conv 0 1 449 @weight=(1,512,512,3,3)f32 #449=(1,512,512,3,3)f32
Tensor.select Tensor.select_92 1 1 233 450 dim=1 index=7 $input=233 #233=(1,16,512)f32 #450=(1,512)f32
nn.Linear stylegan_decoder.style_convs.6.modulated_conv.modulation 1 1 450 451 bias=True in_features=512 out_features=512 @bias=(512)f32 @weight=(512,512)f32 #450=(1,512)f32 #451=(1,512)f32
Tensor.view Tensor.view_209 1 1 451 452 shape=(1,1,512,1,1) $input=451 #451=(1,512)f32 #452=(1,1,512,1,1)f32
pnnx.Expression pnnx_expr_1138 2 1 449 452 453 expr=mul(@0,@1) #449=(1,512,512,3,3)f32 #452=(1,1,512,1,1)f32 #453=(1,512,512,3,3)f32
pnnx.Expression pnnx_expr_1137 1 1 453 454 expr=pow(@0,2) #453=(1,512,512,3,3)f32 #454=(1,512,512,3,3)f32
torch.sum torch.sum_118 1 1 454 455 dim=(2,3,4) keepdim=False $input=454 #454=(1,512,512,3,3)f32 #455=(1,512)f32
pnnx.Expression pnnx_expr_1132 1 1 455 456 expr=rsqrt(add(@0,1.000000e-08)) #455=(1,512)f32 #456=(1,512)f32
Tensor.view Tensor.view_210 1 1 456 457 shape=(1,512,1,1,1) $input=456 #456=(1,512)f32 #457=(1,512,1,1,1)f32
pnnx.Expression pnnx_expr_1127 2 1 453 457 458 expr=mul(@0,@1) #453=(1,512,512,3,3)f32 #457=(1,512,1,1,1)f32 #458=(1,512,512,3,3)f32
F.upsample F.upsample_72 1 1 433 459 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=433 #433=(1,512,32,32)f32 #459=(1,512,64,64)f32
Tensor.view Tensor.view_211 1 1 458 460 shape=(512,512,3,3) $input=458 #458=(1,512,512,3,3)f32 #460=(512,512,3,3)f32
F.conv2d F.conv2d_26 2 1 459 460 461 bias=None dilation=(1,1) groups=1 padding=(1,1) stride=(1,1) $input=459 $weight=460 #459=(1,512,64,64)f32 #460=(512,512,3,3)f32 #461=(1,512,64,64)f32
pnnx.Expression pnnx_expr_1079 1 1 461 462 expr=mul(@0,1.414214e+00) #461=(1,512,64,64)f32 #462=(1,512,64,64)f32
Tensor.new_empty Tensor.new_empty_7 1 1 462 463 size=(1,1,64,64) $input=462 #462=(1,512,64,64)f32 #463=(1,1,64,64)f32
aten::normal_ pnnx_1793 4 1 463 446 445 444 464 #463=(1,1,64,64)f32 #464=(1,1,64,64)f32
pnnx.Expression pnnx_expr_1064 4 1 462 448 464 447 465 expr=add(add(@0,mul(@1,@2)),@3) #462=(1,512,64,64)f32 #448=(1)f32 #464=(1,1,64,64)f32 #447=(1,512,1,1)f32 #465=(1,512,64,64)f32
nn.LeakyReLU stylegan_decoder.style_convs.6.activate 1 1 465 466 negative_slope=2.000000e-01 #465=(1,512,64,64)f32 #466=(1,512,64,64)f32
torch.split torch.split_129 1 2 466 467 468 dim=1 split_size_or_sections=256 $tensor=466 #466=(1,512,64,64)f32 #467=(1,256,64,64)f32 #468=(1,256,64,64)f32
pnnx.Expression pnnx_expr_1059 3 1 468 160 165 469 expr=add(mul(@0,@1),@2) #468=(1,256,64,64)f32 #160=(1,256,64,64)f32 #165=(1,256,64,64)f32 #469=(1,256,64,64)f32
pnnx.Expression pnnx_expr_1052 0 1 470 expr=None
pnnx.Expression pnnx_expr_1051 0 1 471 expr=1.000000e+00
pnnx.Expression pnnx_expr_1050 0 1 472 expr=0.000000e+00
pnnx.Attribute stylegan_decoder.style_convs.7 0 1 473 @bias=(1,512,1,1)f32 #473=(1,512,1,1)f32
pnnx.Attribute stylegan_decoder.style_convs.7 0 1 474 @weight=(1)f32 #474=(1)f32
pnnx.Attribute stylegan_decoder.style_convs.7.modulated_conv 0 1 475 @weight=(1,512,512,3,3)f32 #475=(1,512,512,3,3)f32
torch.cat torch.cat_107 2 1 467 469 476 dim=1 #467=(1,256,64,64)f32 #469=(1,256,64,64)f32 #476=(1,512,64,64)f32
Tensor.select Tensor.select_93 1 1 233 477 dim=1 index=8 $input=233 #233=(1,16,512)f32 #477=(1,512)f32
nn.Linear stylegan_decoder.style_convs.7.modulated_conv.modulation 1 1 477 478 bias=True in_features=512 out_features=512 @bias=(512)f32 @weight=(512,512)f32 #477=(1,512)f32 #478=(1,512)f32
Tensor.view Tensor.view_214 1 1 478 479 shape=(1,1,512,1,1) $input=478 #478=(1,512)f32 #479=(1,1,512,1,1)f32
pnnx.Expression pnnx_expr_1024 2 1 475 479 480 expr=mul(@0,@1) #475=(1,512,512,3,3)f32 #479=(1,1,512,1,1)f32 #480=(1,512,512,3,3)f32
pnnx.Expression pnnx_expr_1023 1 1 480 481 expr=pow(@0,2) #480=(1,512,512,3,3)f32 #481=(1,512,512,3,3)f32
torch.sum torch.sum_119 1 1 481 482 dim=(2,3,4) keepdim=False $input=481 #481=(1,512,512,3,3)f32 #482=(1,512)f32
pnnx.Expression pnnx_expr_1018 1 1 482 483 expr=rsqrt(add(@0,1.000000e-08)) #482=(1,512)f32 #483=(1,512)f32
Tensor.view Tensor.view_215 1 1 483 484 shape=(1,512,1,1,1) $input=483 #483=(1,512)f32 #484=(1,512,1,1,1)f32
pnnx.Expression pnnx_expr_1013 2 1 480 484 485 expr=mul(@0,@1) #480=(1,512,512,3,3)f32 #484=(1,512,1,1,1)f32 #485=(1,512,512,3,3)f32
Tensor.view Tensor.view_216 1 1 485 486 shape=(512,512,3,3) $input=485 #485=(1,512,512,3,3)f32 #486=(512,512,3,3)f32
F.conv2d F.conv2d_27 2 1 476 486 487 bias=None dilation=(1,1) groups=1 padding=(1,1) stride=(1,1) $input=476 $weight=486 #476=(1,512,64,64)f32 #486=(512,512,3,3)f32 #487=(1,512,64,64)f32
pnnx.Expression pnnx_expr_967 1 1 487 488 expr=mul(@0,1.414214e+00) #487=(1,512,64,64)f32 #488=(1,512,64,64)f32
Tensor.new_empty Tensor.new_empty_8 1 1 488 489 size=(1,1,64,64) $input=488 #488=(1,512,64,64)f32 #489=(1,1,64,64)f32
aten::normal_ pnnx_1934 4 1 489 472 471 470 490 #489=(1,1,64,64)f32 #490=(1,1,64,64)f32
pnnx.Expression pnnx_expr_952 4 1 488 474 490 473 491 expr=add(add(@0,mul(@1,@2)),@3) #488=(1,512,64,64)f32 #474=(1)f32 #490=(1,1,64,64)f32 #473=(1,512,1,1)f32 #491=(1,512,64,64)f32
nn.LeakyReLU stylegan_decoder.style_convs.7.activate 1 1 491 492 negative_slope=2.000000e-01 #491=(1,512,64,64)f32 #492=(1,512,64,64)f32
pnnx.Attribute stylegan_decoder.to_rgbs.3 0 1 493 @bias=(1,3,1,1)f32 #493=(1,3,1,1)f32
pnnx.Attribute stylegan_decoder.to_rgbs.3.modulated_conv 0 1 494 @weight=(1,3,512,1,1)f32 #494=(1,3,512,1,1)f32
Tensor.select Tensor.select_94 1 1 233 495 dim=1 index=9 $input=233 #233=(1,16,512)f32 #495=(1,512)f32
nn.Linear stylegan_decoder.to_rgbs.3.modulated_conv.modulation 1 1 495 496 bias=True in_features=512 out_features=512 @bias=(512)f32 @weight=(512,512)f32 #495=(1,512)f32 #496=(1,512)f32
Tensor.view Tensor.view_219 1 1 496 497 shape=(1,1,512,1,1) $input=496 #496=(1,512)f32 #497=(1,1,512,1,1)f32
pnnx.Expression pnnx_expr_927 2 1 494 497 498 expr=mul(@0,@1) #494=(1,3,512,1,1)f32 #497=(1,1,512,1,1)f32 #498=(1,3,512,1,1)f32
Tensor.view Tensor.view_220 1 1 498 499 shape=(3,512,1,1) $input=498 #498=(1,3,512,1,1)f32 #499=(3,512,1,1)f32
F.conv2d F.conv2d_28 2 1 492 499 500 bias=None dilation=(1,1) groups=1 padding=(0,0) stride=(1,1) $input=492 $weight=499 #492=(1,512,64,64)f32 #499=(3,512,1,1)f32 #500=(1,3,64,64)f32
F.upsample F.upsample_73 1 1 443 501 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=443 #443=(1,3,32,32)f32 #501=(1,3,64,64)f32
pnnx.Expression pnnx_expr_880 3 1 500 493 501 502 expr=add(add(@0,@1),@2) #500=(1,3,64,64)f32 #493=(1,3,1,1)f32 #501=(1,3,64,64)f32 #502=(1,3,64,64)f32
pnnx.Expression pnnx_expr_873 0 1 503 expr=None
pnnx.Expression pnnx_expr_872 0 1 504 expr=1.000000e+00
pnnx.Expression pnnx_expr_871 0 1 505 expr=0.000000e+00
pnnx.Attribute stylegan_decoder.style_convs.8 0 1 506 @bias=(1,256,1,1)f32 #506=(1,256,1,1)f32
pnnx.Attribute stylegan_decoder.style_convs.8 0 1 507 @weight=(1)f32 #507=(1)f32
pnnx.Attribute stylegan_decoder.style_convs.8.modulated_conv 0 1 508 @weight=(1,256,512,3,3)f32 #508=(1,256,512,3,3)f32
Tensor.select Tensor.select_95 1 1 233 509 dim=1 index=9 $input=233 #233=(1,16,512)f32 #509=(1,512)f32
nn.Linear stylegan_decoder.style_convs.8.modulated_conv.modulation 1 1 509 510 bias=True in_features=512 out_features=512 @bias=(512)f32 @weight=(512,512)f32 #509=(1,512)f32 #510=(1,512)f32
Tensor.view Tensor.view_223 1 1 510 511 shape=(1,1,512,1,1) $input=510 #510=(1,512)f32 #511=(1,1,512,1,1)f32
pnnx.Expression pnnx_expr_844 2 1 508 511 512 expr=mul(@0,@1) #508=(1,256,512,3,3)f32 #511=(1,1,512,1,1)f32 #512=(1,256,512,3,3)f32
pnnx.Expression pnnx_expr_843 1 1 512 513 expr=pow(@0,2) #512=(1,256,512,3,3)f32 #513=(1,256,512,3,3)f32
torch.sum torch.sum_120 1 1 513 514 dim=(2,3,4) keepdim=False $input=513 #513=(1,256,512,3,3)f32 #514=(1,256)f32
pnnx.Expression pnnx_expr_838 1 1 514 515 expr=rsqrt(add(@0,1.000000e-08)) #514=(1,256)f32 #515=(1,256)f32
Tensor.view Tensor.view_224 1 1 515 516 shape=(1,256,1,1,1) $input=515 #515=(1,256)f32 #516=(1,256,1,1,1)f32
pnnx.Expression pnnx_expr_833 2 1 512 516 517 expr=mul(@0,@1) #512=(1,256,512,3,3)f32 #516=(1,256,1,1,1)f32 #517=(1,256,512,3,3)f32
F.upsample F.upsample_74 1 1 492 518 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=492 #492=(1,512,64,64)f32 #518=(1,512,128,128)f32
Tensor.view Tensor.view_225 1 1 517 519 shape=(256,512,3,3) $input=517 #517=(1,256,512,3,3)f32 #519=(256,512,3,3)f32
F.conv2d F.conv2d_29 2 1 518 519 520 bias=None dilation=(1,1) groups=1 padding=(1,1) stride=(1,1) $input=518 $weight=519 #518=(1,512,128,128)f32 #519=(256,512,3,3)f32 #520=(1,256,128,128)f32
pnnx.Expression pnnx_expr_785 1 1 520 521 expr=mul(@0,1.414214e+00) #520=(1,256,128,128)f32 #521=(1,256,128,128)f32
Tensor.new_empty Tensor.new_empty_9 1 1 521 522 size=(1,1,128,128) $input=521 #521=(1,256,128,128)f32 #522=(1,1,128,128)f32
aten::normal_ pnnx_2162 4 1 522 505 504 503 523 #522=(1,1,128,128)f32 #523=(1,1,128,128)f32
pnnx.Expression pnnx_expr_770 4 1 521 507 523 506 524 expr=add(add(@0,mul(@1,@2)),@3) #521=(1,256,128,128)f32 #507=(1)f32 #523=(1,1,128,128)f32 #506=(1,256,1,1)f32 #524=(1,256,128,128)f32
nn.LeakyReLU stylegan_decoder.style_convs.8.activate 1 1 524 525 negative_slope=2.000000e-01 #524=(1,256,128,128)f32 #525=(1,256,128,128)f32
torch.split torch.split_130 1 2 525 526 527 dim=1 split_size_or_sections=128 $tensor=525 #525=(1,256,128,128)f32 #526=(1,128,128,128)f32 #527=(1,128,128,128)f32
pnnx.Expression pnnx_expr_766 3 1 527 182 187 528 expr=add(mul(@0,@1),@2) #527=(1,128,128,128)f32 #182=(1,128,128,128)f32 #187=(1,128,128,128)f32 #528=(1,128,128,128)f32
pnnx.Expression pnnx_expr_759 0 1 529 expr=None
pnnx.Expression pnnx_expr_758 0 1 530 expr=1.000000e+00
pnnx.Expression pnnx_expr_757 0 1 531 expr=0.000000e+00
pnnx.Attribute stylegan_decoder.style_convs.9 0 1 532 @bias=(1,256,1,1)f32 #532=(1,256,1,1)f32
pnnx.Attribute stylegan_decoder.style_convs.9 0 1 533 @weight=(1)f32 #533=(1)f32
pnnx.Attribute stylegan_decoder.style_convs.9.modulated_conv 0 1 534 @weight=(1,256,256,3,3)f32 #534=(1,256,256,3,3)f32
torch.cat torch.cat_108 2 1 526 528 535 dim=1 #526=(1,128,128,128)f32 #528=(1,128,128,128)f32 #535=(1,256,128,128)f32
Tensor.select Tensor.select_96 1 1 233 536 dim=1 index=10 $input=233 #233=(1,16,512)f32 #536=(1,512)f32
nn.Linear stylegan_decoder.style_convs.9.modulated_conv.modulation 1 1 536 537 bias=True in_features=512 out_features=256 @bias=(256)f32 @weight=(256,512)f32 #536=(1,512)f32 #537=(1,256)f32
Tensor.view Tensor.view_228 1 1 537 538 shape=(1,1,256,1,1) $input=537 #537=(1,256)f32 #538=(1,1,256,1,1)f32
pnnx.Expression pnnx_expr_731 2 1 534 538 539 expr=mul(@0,@1) #534=(1,256,256,3,3)f32 #538=(1,1,256,1,1)f32 #539=(1,256,256,3,3)f32
pnnx.Expression pnnx_expr_730 1 1 539 540 expr=pow(@0,2) #539=(1,256,256,3,3)f32 #540=(1,256,256,3,3)f32
torch.sum torch.sum_121 1 1 540 541 dim=(2,3,4) keepdim=False $input=540 #540=(1,256,256,3,3)f32 #541=(1,256)f32
pnnx.Expression pnnx_expr_725 1 1 541 542 expr=rsqrt(add(@0,1.000000e-08)) #541=(1,256)f32 #542=(1,256)f32
Tensor.view Tensor.view_229 1 1 542 543 shape=(1,256,1,1,1) $input=542 #542=(1,256)f32 #543=(1,256,1,1,1)f32
pnnx.Expression pnnx_expr_720 2 1 539 543 544 expr=mul(@0,@1) #539=(1,256,256,3,3)f32 #543=(1,256,1,1,1)f32 #544=(1,256,256,3,3)f32
Tensor.view Tensor.view_230 1 1 544 545 shape=(256,256,3,3) $input=544 #544=(1,256,256,3,3)f32 #545=(256,256,3,3)f32
F.conv2d F.conv2d_30 2 1 535 545 546 bias=None dilation=(1,1) groups=1 padding=(1,1) stride=(1,1) $input=535 $weight=545 #535=(1,256,128,128)f32 #545=(256,256,3,3)f32 #546=(1,256,128,128)f32
pnnx.Expression pnnx_expr_674 1 1 546 547 expr=mul(@0,1.414214e+00) #546=(1,256,128,128)f32 #547=(1,256,128,128)f32
Tensor.new_empty Tensor.new_empty_10 1 1 547 548 size=(1,1,128,128) $input=547 #547=(1,256,128,128)f32 #548=(1,1,128,128)f32
aten::normal_ pnnx_2302 4 1 548 531 530 529 549 #548=(1,1,128,128)f32 #549=(1,1,128,128)f32
pnnx.Expression pnnx_expr_659 4 1 547 533 549 532 550 expr=add(add(@0,mul(@1,@2)),@3) #547=(1,256,128,128)f32 #533=(1)f32 #549=(1,1,128,128)f32 #532=(1,256,1,1)f32 #550=(1,256,128,128)f32
nn.LeakyReLU stylegan_decoder.style_convs.9.activate 1 1 550 551 negative_slope=2.000000e-01 #550=(1,256,128,128)f32 #551=(1,256,128,128)f32
pnnx.Attribute stylegan_decoder.to_rgbs.4 0 1 552 @bias=(1,3,1,1)f32 #552=(1,3,1,1)f32
pnnx.Attribute stylegan_decoder.to_rgbs.4.modulated_conv 0 1 553 @weight=(1,3,256,1,1)f32 #553=(1,3,256,1,1)f32
Tensor.select Tensor.select_97 1 1 233 554 dim=1 index=11 $input=233 #233=(1,16,512)f32 #554=(1,512)f32
nn.Linear stylegan_decoder.to_rgbs.4.modulated_conv.modulation 1 1 554 555 bias=True in_features=512 out_features=256 @bias=(256)f32 @weight=(256,512)f32 #554=(1,512)f32 #555=(1,256)f32
Tensor.view Tensor.view_233 1 1 555 556 shape=(1,1,256,1,1) $input=555 #555=(1,256)f32 #556=(1,1,256,1,1)f32
pnnx.Expression pnnx_expr_634 2 1 553 556 557 expr=mul(@0,@1) #553=(1,3,256,1,1)f32 #556=(1,1,256,1,1)f32 #557=(1,3,256,1,1)f32
Tensor.view Tensor.view_234 1 1 557 558 shape=(3,256,1,1) $input=557 #557=(1,3,256,1,1)f32 #558=(3,256,1,1)f32
F.conv2d F.conv2d_31 2 1 551 558 559 bias=None dilation=(1,1) groups=1 padding=(0,0) stride=(1,1) $input=551 $weight=558 #551=(1,256,128,128)f32 #558=(3,256,1,1)f32 #559=(1,3,128,128)f32
F.upsample F.upsample_75 1 1 502 560 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=502 #502=(1,3,64,64)f32 #560=(1,3,128,128)f32
pnnx.Expression pnnx_expr_587 3 1 559 552 560 561 expr=add(add(@0,@1),@2) #559=(1,3,128,128)f32 #552=(1,3,1,1)f32 #560=(1,3,128,128)f32 #561=(1,3,128,128)f32
pnnx.Expression pnnx_expr_580 0 1 562 expr=None
pnnx.Expression pnnx_expr_579 0 1 563 expr=1.000000e+00
pnnx.Expression pnnx_expr_578 0 1 564 expr=0.000000e+00
pnnx.Attribute stylegan_decoder.style_convs.10 0 1 565 @bias=(1,128,1,1)f32 #565=(1,128,1,1)f32
pnnx.Attribute stylegan_decoder.style_convs.10 0 1 566 @weight=(1)f32 #566=(1)f32
pnnx.Attribute stylegan_decoder.style_convs.10.modulated_conv 0 1 567 @weight=(1,128,256,3,3)f32 #567=(1,128,256,3,3)f32
Tensor.select Tensor.select_98 1 1 233 568 dim=1 index=11 $input=233 #233=(1,16,512)f32 #568=(1,512)f32
nn.Linear stylegan_decoder.style_convs.10.modulated_conv.modulation 1 1 568 569 bias=True in_features=512 out_features=256 @bias=(256)f32 @weight=(256,512)f32 #568=(1,512)f32 #569=(1,256)f32
Tensor.view Tensor.view_237 1 1 569 570 shape=(1,1,256,1,1) $input=569 #569=(1,256)f32 #570=(1,1,256,1,1)f32
pnnx.Expression pnnx_expr_551 2 1 567 570 571 expr=mul(@0,@1) #567=(1,128,256,3,3)f32 #570=(1,1,256,1,1)f32 #571=(1,128,256,3,3)f32
pnnx.Expression pnnx_expr_550 1 1 571 572 expr=pow(@0,2) #571=(1,128,256,3,3)f32 #572=(1,128,256,3,3)f32
torch.sum torch.sum_122 1 1 572 573 dim=(2,3,4) keepdim=False $input=572 #572=(1,128,256,3,3)f32 #573=(1,128)f32
pnnx.Expression pnnx_expr_545 1 1 573 574 expr=rsqrt(add(@0,1.000000e-08)) #573=(1,128)f32 #574=(1,128)f32
Tensor.view Tensor.view_238 1 1 574 575 shape=(1,128,1,1,1) $input=574 #574=(1,128)f32 #575=(1,128,1,1,1)f32
pnnx.Expression pnnx_expr_540 2 1 571 575 576 expr=mul(@0,@1) #571=(1,128,256,3,3)f32 #575=(1,128,1,1,1)f32 #576=(1,128,256,3,3)f32
F.upsample F.upsample_76 1 1 551 577 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=551 #551=(1,256,128,128)f32 #577=(1,256,256,256)f32
Tensor.view Tensor.view_239 1 1 576 578 shape=(128,256,3,3) $input=576 #576=(1,128,256,3,3)f32 #578=(128,256,3,3)f32
F.conv2d F.conv2d_32 2 1 577 578 579 bias=None dilation=(1,1) groups=1 padding=(1,1) stride=(1,1) $input=577 $weight=578 #577=(1,256,256,256)f32 #578=(128,256,3,3)f32 #579=(1,128,256,256)f32
pnnx.Expression pnnx_expr_492 1 1 579 580 expr=mul(@0,1.414214e+00) #579=(1,128,256,256)f32 #580=(1,128,256,256)f32
Tensor.new_empty Tensor.new_empty_11 1 1 580 581 size=(1,1,256,256) $input=580 #580=(1,128,256,256)f32 #581=(1,1,256,256)f32
aten::normal_ pnnx_2530 4 1 581 564 563 562 582 #581=(1,1,256,256)f32 #582=(1,1,256,256)f32
pnnx.Expression pnnx_expr_477 4 1 580 566 582 565 583 expr=add(add(@0,mul(@1,@2)),@3) #580=(1,128,256,256)f32 #566=(1)f32 #582=(1,1,256,256)f32 #565=(1,128,1,1)f32 #583=(1,128,256,256)f32
nn.LeakyReLU stylegan_decoder.style_convs.10.activate 1 1 583 584 negative_slope=2.000000e-01 #583=(1,128,256,256)f32 #584=(1,128,256,256)f32
torch.split torch.split_131 1 2 584 585 586 dim=1 split_size_or_sections=64 $tensor=584 #584=(1,128,256,256)f32 #585=(1,64,256,256)f32 #586=(1,64,256,256)f32
pnnx.Expression pnnx_expr_473 3 1 586 204 209 587 expr=add(mul(@0,@1),@2) #586=(1,64,256,256)f32 #204=(1,64,256,256)f32 #209=(1,64,256,256)f32 #587=(1,64,256,256)f32
pnnx.Expression pnnx_expr_466 0 1 588 expr=None
pnnx.Expression pnnx_expr_465 0 1 589 expr=1.000000e+00
pnnx.Expression pnnx_expr_464 0 1 590 expr=0.000000e+00
pnnx.Attribute stylegan_decoder.style_convs.11 0 1 591 @bias=(1,128,1,1)f32 #591=(1,128,1,1)f32
pnnx.Attribute stylegan_decoder.style_convs.11 0 1 592 @weight=(1)f32 #592=(1)f32
pnnx.Attribute stylegan_decoder.style_convs.11.modulated_conv 0 1 593 @weight=(1,128,128,3,3)f32 #593=(1,128,128,3,3)f32
torch.cat torch.cat_109 2 1 585 587 594 dim=1 #585=(1,64,256,256)f32 #587=(1,64,256,256)f32 #594=(1,128,256,256)f32
Tensor.select Tensor.select_99 1 1 233 595 dim=1 index=12 $input=233 #233=(1,16,512)f32 #595=(1,512)f32
nn.Linear stylegan_decoder.style_convs.11.modulated_conv.modulation 1 1 595 596 bias=True in_features=512 out_features=128 @bias=(128)f32 @weight=(128,512)f32 #595=(1,512)f32 #596=(1,128)f32
Tensor.view Tensor.view_242 1 1 596 597 shape=(1,1,128,1,1) $input=596 #596=(1,128)f32 #597=(1,1,128,1,1)f32
pnnx.Expression pnnx_expr_438 2 1 593 597 598 expr=mul(@0,@1) #593=(1,128,128,3,3)f32 #597=(1,1,128,1,1)f32 #598=(1,128,128,3,3)f32
pnnx.Expression pnnx_expr_437 1 1 598 599 expr=pow(@0,2) #598=(1,128,128,3,3)f32 #599=(1,128,128,3,3)f32
torch.sum torch.sum_123 1 1 599 600 dim=(2,3,4) keepdim=False $input=599 #599=(1,128,128,3,3)f32 #600=(1,128)f32
pnnx.Expression pnnx_expr_432 1 1 600 601 expr=rsqrt(add(@0,1.000000e-08)) #600=(1,128)f32 #601=(1,128)f32
Tensor.view Tensor.view_243 1 1 601 602 shape=(1,128,1,1,1) $input=601 #601=(1,128)f32 #602=(1,128,1,1,1)f32
pnnx.Expression pnnx_expr_427 2 1 598 602 603 expr=mul(@0,@1) #598=(1,128,128,3,3)f32 #602=(1,128,1,1,1)f32 #603=(1,128,128,3,3)f32
Tensor.view Tensor.view_244 1 1 603 604 shape=(128,128,3,3) $input=603 #603=(1,128,128,3,3)f32 #604=(128,128,3,3)f32
F.conv2d F.conv2d_33 2 1 594 604 605 bias=None dilation=(1,1) groups=1 padding=(1,1) stride=(1,1) $input=594 $weight=604 #594=(1,128,256,256)f32 #604=(128,128,3,3)f32 #605=(1,128,256,256)f32
pnnx.Expression pnnx_expr_381 1 1 605 606 expr=mul(@0,1.414214e+00) #605=(1,128,256,256)f32 #606=(1,128,256,256)f32
Tensor.new_empty Tensor.new_empty_12 1 1 606 607 size=(1,1,256,256) $input=606 #606=(1,128,256,256)f32 #607=(1,1,256,256)f32
aten::normal_ pnnx_2670 4 1 607 590 589 588 608 #607=(1,1,256,256)f32 #608=(1,1,256,256)f32
pnnx.Expression pnnx_expr_366 4 1 606 592 608 591 609 expr=add(add(@0,mul(@1,@2)),@3) #606=(1,128,256,256)f32 #592=(1)f32 #608=(1,1,256,256)f32 #591=(1,128,1,1)f32 #609=(1,128,256,256)f32
nn.LeakyReLU stylegan_decoder.style_convs.11.activate 1 1 609 610 negative_slope=2.000000e-01 #609=(1,128,256,256)f32 #610=(1,128,256,256)f32
pnnx.Attribute stylegan_decoder.to_rgbs.5 0 1 611 @bias=(1,3,1,1)f32 #611=(1,3,1,1)f32
pnnx.Attribute stylegan_decoder.to_rgbs.5.modulated_conv 0 1 612 @weight=(1,3,128,1,1)f32 #612=(1,3,128,1,1)f32
Tensor.select Tensor.select_100 1 1 233 613 dim=1 index=13 $input=233 #233=(1,16,512)f32 #613=(1,512)f32
nn.Linear stylegan_decoder.to_rgbs.5.modulated_conv.modulation 1 1 613 614 bias=True in_features=512 out_features=128 @bias=(128)f32 @weight=(128,512)f32 #613=(1,512)f32 #614=(1,128)f32
Tensor.view Tensor.view_247 1 1 614 615 shape=(1,1,128,1,1) $input=614 #614=(1,128)f32 #615=(1,1,128,1,1)f32
pnnx.Expression pnnx_expr_341 2 1 612 615 616 expr=mul(@0,@1) #612=(1,3,128,1,1)f32 #615=(1,1,128,1,1)f32 #616=(1,3,128,1,1)f32
Tensor.view Tensor.view_248 1 1 616 617 shape=(3,128,1,1) $input=616 #616=(1,3,128,1,1)f32 #617=(3,128,1,1)f32
F.conv2d F.conv2d_34 2 1 610 617 618 bias=None dilation=(1,1) groups=1 padding=(0,0) stride=(1,1) $input=610 $weight=617 #610=(1,128,256,256)f32 #617=(3,128,1,1)f32 #618=(1,3,256,256)f32
F.upsample F.upsample_77 1 1 561 619 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=561 #561=(1,3,128,128)f32 #619=(1,3,256,256)f32
pnnx.Expression pnnx_expr_294 3 1 618 611 619 620 expr=add(add(@0,@1),@2) #618=(1,3,256,256)f32 #611=(1,3,1,1)f32 #619=(1,3,256,256)f32 #620=(1,3,256,256)f32
pnnx.Expression pnnx_expr_287 0 1 621 expr=None
pnnx.Expression pnnx_expr_286 0 1 622 expr=1.000000e+00
pnnx.Expression pnnx_expr_285 0 1 623 expr=0.000000e+00
pnnx.Attribute stylegan_decoder.style_convs.12 0 1 624 @bias=(1,64,1,1)f32 #624=(1,64,1,1)f32
pnnx.Attribute stylegan_decoder.style_convs.12 0 1 625 @weight=(1)f32 #625=(1)f32
pnnx.Attribute stylegan_decoder.style_convs.12.modulated_conv 0 1 626 @weight=(1,64,128,3,3)f32 #626=(1,64,128,3,3)f32
Tensor.select Tensor.select_101 1 1 233 627 dim=1 index=13 $input=233 #233=(1,16,512)f32 #627=(1,512)f32
nn.Linear stylegan_decoder.style_convs.12.modulated_conv.modulation 1 1 627 628 bias=True in_features=512 out_features=128 @bias=(128)f32 @weight=(128,512)f32 #627=(1,512)f32 #628=(1,128)f32
Tensor.view Tensor.view_251 1 1 628 629 shape=(1,1,128,1,1) $input=628 #628=(1,128)f32 #629=(1,1,128,1,1)f32
pnnx.Expression pnnx_expr_258 2 1 626 629 630 expr=mul(@0,@1) #626=(1,64,128,3,3)f32 #629=(1,1,128,1,1)f32 #630=(1,64,128,3,3)f32
pnnx.Expression pnnx_expr_257 1 1 630 631 expr=pow(@0,2) #630=(1,64,128,3,3)f32 #631=(1,64,128,3,3)f32
torch.sum torch.sum_124 1 1 631 632 dim=(2,3,4) keepdim=False $input=631 #631=(1,64,128,3,3)f32 #632=(1,64)f32
pnnx.Expression pnnx_expr_252 1 1 632 633 expr=rsqrt(add(@0,1.000000e-08)) #632=(1,64)f32 #633=(1,64)f32
Tensor.view Tensor.view_252 1 1 633 634 shape=(1,64,1,1,1) $input=633 #633=(1,64)f32 #634=(1,64,1,1,1)f32
pnnx.Expression pnnx_expr_247 2 1 630 634 635 expr=mul(@0,@1) #630=(1,64,128,3,3)f32 #634=(1,64,1,1,1)f32 #635=(1,64,128,3,3)f32
F.upsample F.upsample_78 1 1 610 636 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=610 #610=(1,128,256,256)f32 #636=(1,128,512,512)f32
Tensor.view Tensor.view_253 1 1 635 637 shape=(64,128,3,3) $input=635 #635=(1,64,128,3,3)f32 #637=(64,128,3,3)f32
F.conv2d F.conv2d_35 2 1 636 637 638 bias=None dilation=(1,1) groups=1 padding=(1,1) stride=(1,1) $input=636 $weight=637 #636=(1,128,512,512)f32 #637=(64,128,3,3)f32 #638=(1,64,512,512)f32
pnnx.Expression pnnx_expr_199 1 1 638 639 expr=mul(@0,1.414214e+00) #638=(1,64,512,512)f32 #639=(1,64,512,512)f32
Tensor.new_empty Tensor.new_empty_13 1 1 639 640 size=(1,1,512,512) $input=639 #639=(1,64,512,512)f32 #640=(1,1,512,512)f32
aten::normal_ pnnx_2898 4 1 640 623 622 621 641 #640=(1,1,512,512)f32 #641=(1,1,512,512)f32
pnnx.Expression pnnx_expr_184 4 1 639 625 641 624 642 expr=add(add(@0,mul(@1,@2)),@3) #639=(1,64,512,512)f32 #625=(1)f32 #641=(1,1,512,512)f32 #624=(1,64,1,1)f32 #642=(1,64,512,512)f32
nn.LeakyReLU stylegan_decoder.style_convs.12.activate 1 1 642 643 negative_slope=2.000000e-01 #642=(1,64,512,512)f32 #643=(1,64,512,512)f32
torch.split torch.split_132 1 2 643 644 645 dim=1 split_size_or_sections=32 $tensor=643 #643=(1,64,512,512)f32 #644=(1,32,512,512)f32 #645=(1,32,512,512)f32
pnnx.Expression pnnx_expr_180 3 1 645 226 231 646 expr=add(mul(@0,@1),@2) #645=(1,32,512,512)f32 #226=(1,32,512,512)f32 #231=(1,32,512,512)f32 #646=(1,32,512,512)f32
pnnx.Expression pnnx_expr_173 0 1 647 expr=None
pnnx.Expression pnnx_expr_172 0 1 648 expr=1.000000e+00
pnnx.Expression pnnx_expr_171 0 1 649 expr=0.000000e+00
pnnx.Attribute stylegan_decoder.style_convs.13 0 1 650 @bias=(1,64,1,1)f32 #650=(1,64,1,1)f32
pnnx.Attribute stylegan_decoder.style_convs.13 0 1 651 @weight=(1)f32 #651=(1)f32
pnnx.Attribute stylegan_decoder.style_convs.13.modulated_conv 0 1 652 @weight=(1,64,64,3,3)f32 #652=(1,64,64,3,3)f32
torch.cat torch.cat_110 2 1 644 646 653 dim=1 #644=(1,32,512,512)f32 #646=(1,32,512,512)f32 #653=(1,64,512,512)f32
Tensor.select Tensor.select_102 1 1 233 654 dim=1 index=14 $input=233 #233=(1,16,512)f32 #654=(1,512)f32
nn.Linear stylegan_decoder.style_convs.13.modulated_conv.modulation 1 1 654 655 bias=True in_features=512 out_features=64 @bias=(64)f32 @weight=(64,512)f32 #654=(1,512)f32 #655=(1,64)f32
Tensor.view Tensor.view_256 1 1 655 656 shape=(1,1,64,1,1) $input=655 #655=(1,64)f32 #656=(1,1,64,1,1)f32
pnnx.Expression pnnx_expr_145 2 1 652 656 657 expr=mul(@0,@1) #652=(1,64,64,3,3)f32 #656=(1,1,64,1,1)f32 #657=(1,64,64,3,3)f32
pnnx.Expression pnnx_expr_144 1 1 657 658 expr=pow(@0,2) #657=(1,64,64,3,3)f32 #658=(1,64,64,3,3)f32
torch.sum torch.sum_125 1 1 658 659 dim=(2,3,4) keepdim=False $input=658 #658=(1,64,64,3,3)f32 #659=(1,64)f32
pnnx.Expression pnnx_expr_139 1 1 659 660 expr=rsqrt(add(@0,1.000000e-08)) #659=(1,64)f32 #660=(1,64)f32
Tensor.view Tensor.view_257 1 1 660 661 shape=(1,64,1,1,1) $input=660 #660=(1,64)f32 #661=(1,64,1,1,1)f32
pnnx.Expression pnnx_expr_134 2 1 657 661 662 expr=mul(@0,@1) #657=(1,64,64,3,3)f32 #661=(1,64,1,1,1)f32 #662=(1,64,64,3,3)f32
Tensor.view Tensor.view_258 1 1 662 663 shape=(64,64,3,3) $input=662 #662=(1,64,64,3,3)f32 #663=(64,64,3,3)f32
F.conv2d F.conv2d_36 2 1 653 663 664 bias=None dilation=(1,1) groups=1 padding=(1,1) stride=(1,1) $input=653 $weight=663 #653=(1,64,512,512)f32 #663=(64,64,3,3)f32 #664=(1,64,512,512)f32
pnnx.Expression pnnx_expr_88 1 1 664 665 expr=mul(@0,1.414214e+00) #664=(1,64,512,512)f32 #665=(1,64,512,512)f32
Tensor.new_empty Tensor.new_empty_14 1 1 665 666 size=(1,1,512,512) $input=665 #665=(1,64,512,512)f32 #666=(1,1,512,512)f32
aten::normal_ pnnx_3038 4 1 666 649 648 647 667 #666=(1,1,512,512)f32 #667=(1,1,512,512)f32
pnnx.Expression pnnx_expr_73 4 1 665 651 667 650 668 expr=add(add(@0,mul(@1,@2)),@3) #665=(1,64,512,512)f32 #651=(1)f32 #667=(1,1,512,512)f32 #650=(1,64,1,1)f32 #668=(1,64,512,512)f32
nn.LeakyReLU stylegan_decoder.style_convs.13.activate 1 1 668 669 negative_slope=2.000000e-01 #668=(1,64,512,512)f32 #669=(1,64,512,512)f32
pnnx.Attribute stylegan_decoder.to_rgbs.6 0 1 670 @bias=(1,3,1,1)f32 #670=(1,3,1,1)f32
pnnx.Attribute stylegan_decoder.to_rgbs.6.modulated_conv 0 1 671 @weight=(1,3,64,1,1)f32 #671=(1,3,64,1,1)f32
Tensor.select Tensor.select_103 1 1 233 672 dim=1 index=15 $input=233 #233=(1,16,512)f32 #672=(1,512)f32
nn.Linear stylegan_decoder.to_rgbs.6.modulated_conv.modulation 1 1 672 673 bias=True in_features=512 out_features=64 @bias=(64)f32 @weight=(64,512)f32 #672=(1,512)f32 #673=(1,64)f32
Tensor.view Tensor.view_261 1 1 673 674 shape=(1,1,64,1,1) $input=673 #673=(1,64)f32 #674=(1,1,64,1,1)f32
pnnx.Expression pnnx_expr_48 2 1 671 674 675 expr=mul(@0,@1) #671=(1,3,64,1,1)f32 #674=(1,1,64,1,1)f32 #675=(1,3,64,1,1)f32
Tensor.view Tensor.view_262 1 1 675 676 shape=(3,64,1,1) $input=675 #675=(1,3,64,1,1)f32 #676=(3,64,1,1)f32
F.conv2d F.conv2d_37 2 1 669 676 677 bias=None dilation=(1,1) groups=1 padding=(0,0) stride=(1,1) $input=669 $weight=676 #669=(1,64,512,512)f32 #676=(3,64,1,1)f32 #677=(1,3,512,512)f32
F.upsample F.upsample_79 1 1 620 678 align_corners=False mode=bilinear scale_factor=(2.000000e+00,2.000000e+00) $input=620 #620=(1,3,256,256)f32 #678=(1,3,512,512)f32
pnnx.Expression pnnx_expr_1 3 1 677 670 678 679 expr=add(add(@0,@1),@2) #677=(1,3,512,512)f32 #670=(1,3,1,1)f32 #678=(1,3,512,512)f32 #679=(1,3,512,512)f32
pnnx.Expression pnnx_expr_0 7 1 100 122 144 166 188 210 232 680 expr=[@0,@1,@2,@3,@4,@5,@6] #100=(1,3,8,8)f32 #122=(1,3,16,16)f32 #144=(1,3,32,32)f32 #166=(1,3,64,64)f32 #188=(1,3,128,128)f32 #210=(1,3,256,256)f32 #232=(1,3,512,512)f32
prim::TupleConstruct pnnx_3135 2 1 679 680 681 #679=(1,3,512,512)f32
pnnx.Output pnnx_output_0 1 0 681
**
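As a debugging aside (not part of pnnx): the param dump above contains several node types outside the usual nn./F./torch. operator set, e.g. `aten::normal_`, `Tensor.new_empty`, `prim::TupleConstruct`, and `pnnx.Expression` nodes with `expr=None`, which are the ones worth checking first when the ncnn conversion pass cannot resolve an operand. Below is a minimal sketch that tallies the operator types in a `.pnnx.param` file like the one above; the file name is only an assumption, point it at your own param file.

```python
# Rough helper (assumption: the param file sits next to this script and is named as below).
# Tallies operator types in a .pnnx.param dump and counts pnnx.Expression nodes with expr=None.
from collections import Counter

def tally_param_ops(param_path):
    counts = Counter()
    expr_none = 0
    with open(param_path) as f:
        lines = f.read().splitlines()
    # line 0 is the magic number (7767517), line 1 is "operator-count operand-count"
    for line in lines[2:]:
        fields = line.split()
        if not fields:
            continue
        counts[fields[0]] += 1  # operator type, e.g. "nn.Conv2d", "aten::normal_", "pnnx.Expression"
        if fields[0] == "pnnx.Expression" and "expr=None" in fields:
            expr_none += 1
    return counts, expr_none

if __name__ == "__main__":
    counts, expr_none = tally_param_ops("GFPGANCleanv1-NoCE-C2.pnnx.param")
    for op_type, n in sorted(counts.items()):
        print(f"{n:4d}  {op_type}")
    print(f"pnnx.Expression with expr=None: {expr_none}")
```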

The converted pnnx.py file is:
**
import os
import numpy as np
import tempfile, zipfile
import torch
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
def __init__(self):
    super(Model, self).__init__()

    self.conv_body_first = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=3, kernel_size=(1,1), out_channels=32, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_0_conv1 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=32, kernel_size=(3,3), out_channels=32, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_0_conv2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=32, kernel_size=(3,3), out_channels=64, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_0_skip = nn.Conv2d(bias=False, dilation=(1,1), groups=1, in_channels=32, kernel_size=(1,1), out_channels=64, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_1_conv1 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=64, kernel_size=(3,3), out_channels=64, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_1_conv2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=64, kernel_size=(3,3), out_channels=128, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_1_skip = nn.Conv2d(bias=False, dilation=(1,1), groups=1, in_channels=64, kernel_size=(1,1), out_channels=128, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_2_conv1 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=128, kernel_size=(3,3), out_channels=128, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_2_conv2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=128, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_2_skip = nn.Conv2d(bias=False, dilation=(1,1), groups=1, in_channels=128, kernel_size=(1,1), out_channels=256, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_3_conv1 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_3_conv2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_3_skip = nn.Conv2d(bias=False, dilation=(1,1), groups=1, in_channels=256, kernel_size=(1,1), out_channels=256, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_4_conv1 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_4_conv2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_4_skip = nn.Conv2d(bias=False, dilation=(1,1), groups=1, in_channels=256, kernel_size=(1,1), out_channels=256, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_5_conv1 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_5_conv2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_5_skip = nn.Conv2d(bias=False, dilation=(1,1), groups=1, in_channels=256, kernel_size=(1,1), out_channels=256, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_6_conv1 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_6_conv2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_down_6_skip = nn.Conv2d(bias=False, dilation=(1,1), groups=1, in_channels=256, kernel_size=(1,1), out_channels=256, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.final_conv = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.final_linear = nn.Linear(bias=True, in_features=4096, out_features=8192)
    self.conv_body_up_0_conv1 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_0_conv2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_0_skip = nn.Conv2d(bias=False, dilation=(1,1), groups=1, in_channels=256, kernel_size=(1,1), out_channels=256, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.condition_scale_0_0 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_scale_0_1 = nn.LeakyReLU(negative_slope=0.200000)
    self.condition_scale_0_2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_shift_0_0 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_shift_0_1 = nn.LeakyReLU(negative_slope=0.200000)
    self.condition_shift_0_2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.toRGB_0 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(1,1), out_channels=3, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_1_conv1 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_1_conv2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_1_skip = nn.Conv2d(bias=False, dilation=(1,1), groups=1, in_channels=256, kernel_size=(1,1), out_channels=256, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.condition_scale_1_0 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_scale_1_1 = nn.LeakyReLU(negative_slope=0.200000)
    self.condition_scale_1_2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_shift_1_0 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_shift_1_1 = nn.LeakyReLU(negative_slope=0.200000)
    self.condition_shift_1_2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.toRGB_1 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(1,1), out_channels=3, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_2_conv1 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_2_conv2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_2_skip = nn.Conv2d(bias=False, dilation=(1,1), groups=1, in_channels=256, kernel_size=(1,1), out_channels=256, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.condition_scale_2_0 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_scale_2_1 = nn.LeakyReLU(negative_slope=0.200000)
    self.condition_scale_2_2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_shift_2_0 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_shift_2_1 = nn.LeakyReLU(negative_slope=0.200000)
    self.condition_shift_2_2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.toRGB_2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(1,1), out_channels=3, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_3_conv1 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_3_conv2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_3_skip = nn.Conv2d(bias=False, dilation=(1,1), groups=1, in_channels=256, kernel_size=(1,1), out_channels=256, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.condition_scale_3_0 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_scale_3_1 = nn.LeakyReLU(negative_slope=0.200000)
    self.condition_scale_3_2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_shift_3_0 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_shift_3_1 = nn.LeakyReLU(negative_slope=0.200000)
    self.condition_shift_3_2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.toRGB_3 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(1,1), out_channels=3, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_4_conv1 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=256, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_4_conv2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=256, kernel_size=(3,3), out_channels=128, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_4_skip = nn.Conv2d(bias=False, dilation=(1,1), groups=1, in_channels=256, kernel_size=(1,1), out_channels=128, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.condition_scale_4_0 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=128, kernel_size=(3,3), out_channels=128, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_scale_4_1 = nn.LeakyReLU(negative_slope=0.200000)
    self.condition_scale_4_2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=128, kernel_size=(3,3), out_channels=128, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_shift_4_0 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=128, kernel_size=(3,3), out_channels=128, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_shift_4_1 = nn.LeakyReLU(negative_slope=0.200000)
    self.condition_shift_4_2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=128, kernel_size=(3,3), out_channels=128, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.toRGB_4 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=128, kernel_size=(1,1), out_channels=3, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_5_conv1 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=128, kernel_size=(3,3), out_channels=128, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_5_conv2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=128, kernel_size=(3,3), out_channels=64, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_5_skip = nn.Conv2d(bias=False, dilation=(1,1), groups=1, in_channels=128, kernel_size=(1,1), out_channels=64, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.condition_scale_5_0 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=64, kernel_size=(3,3), out_channels=64, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_scale_5_1 = nn.LeakyReLU(negative_slope=0.200000)
    self.condition_scale_5_2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=64, kernel_size=(3,3), out_channels=64, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_shift_5_0 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=64, kernel_size=(3,3), out_channels=64, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_shift_5_1 = nn.LeakyReLU(negative_slope=0.200000)
    self.condition_shift_5_2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=64, kernel_size=(3,3), out_channels=64, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.toRGB_5 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=64, kernel_size=(1,1), out_channels=3, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_6_conv1 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=64, kernel_size=(3,3), out_channels=64, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_6_conv2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=64, kernel_size=(3,3), out_channels=32, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.conv_body_up_6_skip = nn.Conv2d(bias=False, dilation=(1,1), groups=1, in_channels=64, kernel_size=(1,1), out_channels=32, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.condition_scale_6_0 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=32, kernel_size=(3,3), out_channels=32, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_scale_6_1 = nn.LeakyReLU(negative_slope=0.200000)
    self.condition_scale_6_2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=32, kernel_size=(3,3), out_channels=32, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_shift_6_0 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=32, kernel_size=(3,3), out_channels=32, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.condition_shift_6_1 = nn.LeakyReLU(negative_slope=0.200000)
    self.condition_shift_6_2 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=32, kernel_size=(3,3), out_channels=32, padding=(1,1), padding_mode='zeros', stride=(1,1))
    self.toRGB_6 = nn.Conv2d(bias=True, dilation=(1,1), groups=1, in_channels=32, kernel_size=(1,1), out_channels=3, padding=(0,0), padding_mode='zeros', stride=(1,1))
    self.stylegan_decoder_style_conv1_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=512)
    self.stylegan_decoder_style_conv1_activate = nn.LeakyReLU(negative_slope=0.200000)
    self.stylegan_decoder_to_rgb1_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=512)
    self.stylegan_decoder_style_convs_0_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=512)
    self.stylegan_decoder_style_convs_0_activate = nn.LeakyReLU(negative_slope=0.200000)
    self.stylegan_decoder_style_convs_1_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=512)
    self.stylegan_decoder_style_convs_1_activate = nn.LeakyReLU(negative_slope=0.200000)
    self.stylegan_decoder_to_rgbs_0_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=512)
    self.stylegan_decoder_style_convs_2_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=512)
    self.stylegan_decoder_style_convs_2_activate = nn.LeakyReLU(negative_slope=0.200000)
    self.stylegan_decoder_style_convs_3_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=512)
    self.stylegan_decoder_style_convs_3_activate = nn.LeakyReLU(negative_slope=0.200000)
    self.stylegan_decoder_to_rgbs_1_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=512)
    self.stylegan_decoder_style_convs_4_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=512)
    self.stylegan_decoder_style_convs_4_activate = nn.LeakyReLU(negative_slope=0.200000)
    self.stylegan_decoder_style_convs_5_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=512)
    self.stylegan_decoder_style_convs_5_activate = nn.LeakyReLU(negative_slope=0.200000)
    self.stylegan_decoder_to_rgbs_2_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=512)
    self.stylegan_decoder_style_convs_6_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=512)
    self.stylegan_decoder_style_convs_6_activate = nn.LeakyReLU(negative_slope=0.200000)
    self.stylegan_decoder_style_convs_7_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=512)
    self.stylegan_decoder_style_convs_7_activate = nn.LeakyReLU(negative_slope=0.200000)
    self.stylegan_decoder_to_rgbs_3_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=512)
    self.stylegan_decoder_style_convs_8_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=512)
    self.stylegan_decoder_style_convs_8_activate = nn.LeakyReLU(negative_slope=0.200000)
    self.stylegan_decoder_style_convs_9_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=256)
    self.stylegan_decoder_style_convs_9_activate = nn.LeakyReLU(negative_slope=0.200000)
    self.stylegan_decoder_to_rgbs_4_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=256)
    self.stylegan_decoder_style_convs_10_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=256)
    self.stylegan_decoder_style_convs_10_activate = nn.LeakyReLU(negative_slope=0.200000)
    self.stylegan_decoder_style_convs_11_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=128)
    self.stylegan_decoder_style_convs_11_activate = nn.LeakyReLU(negative_slope=0.200000)
    self.stylegan_decoder_to_rgbs_5_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=128)
    self.stylegan_decoder_style_convs_12_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=128)
    self.stylegan_decoder_style_convs_12_activate = nn.LeakyReLU(negative_slope=0.200000)
    self.stylegan_decoder_style_convs_13_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=64)
    self.stylegan_decoder_style_convs_13_activate = nn.LeakyReLU(negative_slope=0.200000)
    self.stylegan_decoder_to_rgbs_6_modulated_conv_modulation = nn.Linear(bias=True, in_features=512, out_features=64)

    archive = zipfile.ZipFile('E://opt//ncnn//tools//pnnx//build_debug//install//bin//GFPGANCleanv1-NoCE-C2.pnnx.bin', 'r')
    self.conv_body_first.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_first.bias', (32), 'float32')
    self.conv_body_first.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_first.weight', (32,3,1,1), 'float32')
    self.conv_body_down_0_conv1.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.0.conv1.bias', (32), 'float32')
    self.conv_body_down_0_conv1.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.0.conv1.weight', (32,32,3,3), 'float32')
    self.conv_body_down_0_conv2.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.0.conv2.bias', (64), 'float32')
    self.conv_body_down_0_conv2.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.0.conv2.weight', (64,32,3,3), 'float32')
    self.conv_body_down_0_skip.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.0.skip.weight', (64,32,1,1), 'float32')
    self.conv_body_down_1_conv1.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.1.conv1.bias', (64), 'float32')
    self.conv_body_down_1_conv1.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.1.conv1.weight', (64,64,3,3), 'float32')
    self.conv_body_down_1_conv2.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.1.conv2.bias', (128), 'float32')
    self.conv_body_down_1_conv2.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.1.conv2.weight', (128,64,3,3), 'float32')
    self.conv_body_down_1_skip.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.1.skip.weight', (128,64,1,1), 'float32')
    self.conv_body_down_2_conv1.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.2.conv1.bias', (128), 'float32')
    self.conv_body_down_2_conv1.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.2.conv1.weight', (128,128,3,3), 'float32')
    self.conv_body_down_2_conv2.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.2.conv2.bias', (256), 'float32')
    self.conv_body_down_2_conv2.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.2.conv2.weight', (256,128,3,3), 'float32')
    self.conv_body_down_2_skip.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.2.skip.weight', (256,128,1,1), 'float32')
    self.conv_body_down_3_conv1.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.3.conv1.bias', (256), 'float32')
    self.conv_body_down_3_conv1.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.3.conv1.weight', (256,256,3,3), 'float32')
    self.conv_body_down_3_conv2.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.3.conv2.bias', (256), 'float32')
    self.conv_body_down_3_conv2.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.3.conv2.weight', (256,256,3,3), 'float32')
    self.conv_body_down_3_skip.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.3.skip.weight', (256,256,1,1), 'float32')
    self.conv_body_down_4_conv1.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.4.conv1.bias', (256), 'float32')
    self.conv_body_down_4_conv1.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.4.conv1.weight', (256,256,3,3), 'float32')
    self.conv_body_down_4_conv2.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.4.conv2.bias', (256), 'float32')
    self.conv_body_down_4_conv2.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.4.conv2.weight', (256,256,3,3), 'float32')
    self.conv_body_down_4_skip.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.4.skip.weight', (256,256,1,1), 'float32')
    self.conv_body_down_5_conv1.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.5.conv1.bias', (256), 'float32')
    self.conv_body_down_5_conv1.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.5.conv1.weight', (256,256,3,3), 'float32')
    self.conv_body_down_5_conv2.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.5.conv2.bias', (256), 'float32')
    self.conv_body_down_5_conv2.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.5.conv2.weight', (256,256,3,3), 'float32')
    self.conv_body_down_5_skip.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.5.skip.weight', (256,256,1,1), 'float32')
    self.conv_body_down_6_conv1.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.6.conv1.bias', (256), 'float32')
    self.conv_body_down_6_conv1.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.6.conv1.weight', (256,256,3,3), 'float32')
    self.conv_body_down_6_conv2.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.6.conv2.bias', (256), 'float32')
    self.conv_body_down_6_conv2.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.6.conv2.weight', (256,256,3,3), 'float32')
    self.conv_body_down_6_skip.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_down.6.skip.weight', (256,256,1,1), 'float32')
    self.final_conv.bias = self.load_pnnx_bin_as_parameter(archive, 'final_conv.bias', (256), 'float32')
    self.final_conv.weight = self.load_pnnx_bin_as_parameter(archive, 'final_conv.weight', (256,256,3,3), 'float32')
    self.final_linear.bias = self.load_pnnx_bin_as_parameter(archive, 'final_linear.bias', (8192), 'float32')
    self.final_linear.weight = self.load_pnnx_bin_as_parameter(archive, 'final_linear.weight', (8192,4096), 'float32')
    self.conv_body_up_0_conv1.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.0.conv1.bias', (256), 'float32')
    self.conv_body_up_0_conv1.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.0.conv1.weight', (256,256,3,3), 'float32')
    self.conv_body_up_0_conv2.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.0.conv2.bias', (256), 'float32')
    self.conv_body_up_0_conv2.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.0.conv2.weight', (256,256,3,3), 'float32')
    self.conv_body_up_0_skip.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.0.skip.weight', (256,256,1,1), 'float32')
    self.condition_scale_0_0.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.0.0.bias', (256), 'float32')
    self.condition_scale_0_0.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.0.0.weight', (256,256,3,3), 'float32')
    self.condition_scale_0_2.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.0.2.bias', (256), 'float32')
    self.condition_scale_0_2.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.0.2.weight', (256,256,3,3), 'float32')
    self.condition_shift_0_0.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.0.0.bias', (256), 'float32')
    self.condition_shift_0_0.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.0.0.weight', (256,256,3,3), 'float32')
    self.condition_shift_0_2.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.0.2.bias', (256), 'float32')
    self.condition_shift_0_2.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.0.2.weight', (256,256,3,3), 'float32')
    self.toRGB_0.bias = self.load_pnnx_bin_as_parameter(archive, 'toRGB.0.bias', (3), 'float32')
    self.toRGB_0.weight = self.load_pnnx_bin_as_parameter(archive, 'toRGB.0.weight', (3,256,1,1), 'float32')
    self.conv_body_up_1_conv1.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.1.conv1.bias', (256), 'float32')
    self.conv_body_up_1_conv1.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.1.conv1.weight', (256,256,3,3), 'float32')
    self.conv_body_up_1_conv2.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.1.conv2.bias', (256), 'float32')
    self.conv_body_up_1_conv2.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.1.conv2.weight', (256,256,3,3), 'float32')
    self.conv_body_up_1_skip.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.1.skip.weight', (256,256,1,1), 'float32')
    self.condition_scale_1_0.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.1.0.bias', (256), 'float32')
    self.condition_scale_1_0.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.1.0.weight', (256,256,3,3), 'float32')
    self.condition_scale_1_2.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.1.2.bias', (256), 'float32')
    self.condition_scale_1_2.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.1.2.weight', (256,256,3,3), 'float32')
    self.condition_shift_1_0.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.1.0.bias', (256), 'float32')
    self.condition_shift_1_0.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.1.0.weight', (256,256,3,3), 'float32')
    self.condition_shift_1_2.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.1.2.bias', (256), 'float32')
    self.condition_shift_1_2.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.1.2.weight', (256,256,3,3), 'float32')
    self.toRGB_1.bias = self.load_pnnx_bin_as_parameter(archive, 'toRGB.1.bias', (3), 'float32')
    self.toRGB_1.weight = self.load_pnnx_bin_as_parameter(archive, 'toRGB.1.weight', (3,256,1,1), 'float32')
    self.conv_body_up_2_conv1.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.2.conv1.bias', (256), 'float32')
    self.conv_body_up_2_conv1.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.2.conv1.weight', (256,256,3,3), 'float32')
    self.conv_body_up_2_conv2.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.2.conv2.bias', (256), 'float32')
    self.conv_body_up_2_conv2.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.2.conv2.weight', (256,256,3,3), 'float32')
    self.conv_body_up_2_skip.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.2.skip.weight', (256,256,1,1), 'float32')
    self.condition_scale_2_0.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.2.0.bias', (256), 'float32')
    self.condition_scale_2_0.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.2.0.weight', (256,256,3,3), 'float32')
    self.condition_scale_2_2.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.2.2.bias', (256), 'float32')
    self.condition_scale_2_2.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.2.2.weight', (256,256,3,3), 'float32')
    self.condition_shift_2_0.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.2.0.bias', (256), 'float32')
    self.condition_shift_2_0.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.2.0.weight', (256,256,3,3), 'float32')
    self.condition_shift_2_2.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.2.2.bias', (256), 'float32')
    self.condition_shift_2_2.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.2.2.weight', (256,256,3,3), 'float32')
    self.toRGB_2.bias = self.load_pnnx_bin_as_parameter(archive, 'toRGB.2.bias', (3), 'float32')
    self.toRGB_2.weight = self.load_pnnx_bin_as_parameter(archive, 'toRGB.2.weight', (3,256,1,1), 'float32')
    self.conv_body_up_3_conv1.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.3.conv1.bias', (256), 'float32')
    self.conv_body_up_3_conv1.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.3.conv1.weight', (256,256,3,3), 'float32')
    self.conv_body_up_3_conv2.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.3.conv2.bias', (256), 'float32')
    self.conv_body_up_3_conv2.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.3.conv2.weight', (256,256,3,3), 'float32')
    self.conv_body_up_3_skip.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.3.skip.weight', (256,256,1,1), 'float32')
    self.condition_scale_3_0.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.3.0.bias', (256), 'float32')
    self.condition_scale_3_0.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.3.0.weight', (256,256,3,3), 'float32')
    self.condition_scale_3_2.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.3.2.bias', (256), 'float32')
    self.condition_scale_3_2.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.3.2.weight', (256,256,3,3), 'float32')
    self.condition_shift_3_0.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.3.0.bias', (256), 'float32')
    self.condition_shift_3_0.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.3.0.weight', (256,256,3,3), 'float32')
    self.condition_shift_3_2.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.3.2.bias', (256), 'float32')
    self.condition_shift_3_2.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.3.2.weight', (256,256,3,3), 'float32')
    self.toRGB_3.bias = self.load_pnnx_bin_as_parameter(archive, 'toRGB.3.bias', (3), 'float32')
    self.toRGB_3.weight = self.load_pnnx_bin_as_parameter(archive, 'toRGB.3.weight', (3,256,1,1), 'float32')
    self.conv_body_up_4_conv1.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.4.conv1.bias', (256), 'float32')
    self.conv_body_up_4_conv1.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.4.conv1.weight', (256,256,3,3), 'float32')
    self.conv_body_up_4_conv2.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.4.conv2.bias', (128), 'float32')
    self.conv_body_up_4_conv2.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.4.conv2.weight', (128,256,3,3), 'float32')
    self.conv_body_up_4_skip.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.4.skip.weight', (128,256,1,1), 'float32')
    self.condition_scale_4_0.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.4.0.bias', (128), 'float32')
    self.condition_scale_4_0.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.4.0.weight', (128,128,3,3), 'float32')
    self.condition_scale_4_2.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.4.2.bias', (128), 'float32')
    self.condition_scale_4_2.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.4.2.weight', (128,128,3,3), 'float32')
    self.condition_shift_4_0.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.4.0.bias', (128), 'float32')
    self.condition_shift_4_0.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.4.0.weight', (128,128,3,3), 'float32')
    self.condition_shift_4_2.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.4.2.bias', (128), 'float32')
    self.condition_shift_4_2.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.4.2.weight', (128,128,3,3), 'float32')
    self.toRGB_4.bias = self.load_pnnx_bin_as_parameter(archive, 'toRGB.4.bias', (3), 'float32')
    self.toRGB_4.weight = self.load_pnnx_bin_as_parameter(archive, 'toRGB.4.weight', (3,128,1,1), 'float32')
    self.conv_body_up_5_conv1.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.5.conv1.bias', (128), 'float32')
    self.conv_body_up_5_conv1.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.5.conv1.weight', (128,128,3,3), 'float32')
    self.conv_body_up_5_conv2.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.5.conv2.bias', (64), 'float32')
    self.conv_body_up_5_conv2.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.5.conv2.weight', (64,128,3,3), 'float32')
    self.conv_body_up_5_skip.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.5.skip.weight', (64,128,1,1), 'float32')
    self.condition_scale_5_0.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.5.0.bias', (64), 'float32')
    self.condition_scale_5_0.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.5.0.weight', (64,64,3,3), 'float32')
    self.condition_scale_5_2.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.5.2.bias', (64), 'float32')
    self.condition_scale_5_2.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.5.2.weight', (64,64,3,3), 'float32')
    self.condition_shift_5_0.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.5.0.bias', (64), 'float32')
    self.condition_shift_5_0.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.5.0.weight', (64,64,3,3), 'float32')
    self.condition_shift_5_2.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.5.2.bias', (64), 'float32')
    self.condition_shift_5_2.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.5.2.weight', (64,64,3,3), 'float32')
    self.toRGB_5.bias = self.load_pnnx_bin_as_parameter(archive, 'toRGB.5.bias', (3), 'float32')
    self.toRGB_5.weight = self.load_pnnx_bin_as_parameter(archive, 'toRGB.5.weight', (3,64,1,1), 'float32')
    self.conv_body_up_6_conv1.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.6.conv1.bias', (64), 'float32')
    self.conv_body_up_6_conv1.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.6.conv1.weight', (64,64,3,3), 'float32')
    self.conv_body_up_6_conv2.bias = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.6.conv2.bias', (32), 'float32')
    self.conv_body_up_6_conv2.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.6.conv2.weight', (32,64,3,3), 'float32')
    self.conv_body_up_6_skip.weight = self.load_pnnx_bin_as_parameter(archive, 'conv_body_up.6.skip.weight', (32,64,1,1), 'float32')
    self.condition_scale_6_0.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.6.0.bias', (32), 'float32')
    self.condition_scale_6_0.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.6.0.weight', (32,32,3,3), 'float32')
    self.condition_scale_6_2.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.6.2.bias', (32), 'float32')
    self.condition_scale_6_2.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_scale.6.2.weight', (32,32,3,3), 'float32')
    self.condition_shift_6_0.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.6.0.bias', (32), 'float32')
    self.condition_shift_6_0.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.6.0.weight', (32,32,3,3), 'float32')
    self.condition_shift_6_2.bias = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.6.2.bias', (32), 'float32')
    self.condition_shift_6_2.weight = self.load_pnnx_bin_as_parameter(archive, 'condition_shift.6.2.weight', (32,32,3,3), 'float32')
    self.toRGB_6.bias = self.load_pnnx_bin_as_parameter(archive, 'toRGB.6.bias', (3), 'float32')
    self.toRGB_6.weight = self.load_pnnx_bin_as_parameter(archive, 'toRGB.6.weight', (3,32,1,1), 'float32')
    self.stylegan_decoder_style_conv1_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_conv1.modulated_conv.modulation.bias', (512), 'float32')
    self.stylegan_decoder_style_conv1_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_conv1.modulated_conv.modulation.weight', (512,512), 'float32')
    self.stylegan_decoder_to_rgb1_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.to_rgb1.modulated_conv.modulation.bias', (512), 'float32')
    self.stylegan_decoder_to_rgb1_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.to_rgb1.modulated_conv.modulation.weight', (512,512), 'float32')
    self.stylegan_decoder_style_convs_0_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.0.modulated_conv.modulation.bias', (512), 'float32')
    self.stylegan_decoder_style_convs_0_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.0.modulated_conv.modulation.weight', (512,512), 'float32')
    self.stylegan_decoder_style_convs_1_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.1.modulated_conv.modulation.bias', (512), 'float32')
    self.stylegan_decoder_style_convs_1_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.1.modulated_conv.modulation.weight', (512,512), 'float32')
    self.stylegan_decoder_to_rgbs_0_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.to_rgbs.0.modulated_conv.modulation.bias', (512), 'float32')
    self.stylegan_decoder_to_rgbs_0_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.to_rgbs.0.modulated_conv.modulation.weight', (512,512), 'float32')
    self.stylegan_decoder_style_convs_2_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.2.modulated_conv.modulation.bias', (512), 'float32')
    self.stylegan_decoder_style_convs_2_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.2.modulated_conv.modulation.weight', (512,512), 'float32')
    self.stylegan_decoder_style_convs_3_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.3.modulated_conv.modulation.bias', (512), 'float32')
    self.stylegan_decoder_style_convs_3_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.3.modulated_conv.modulation.weight', (512,512), 'float32')
    self.stylegan_decoder_to_rgbs_1_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.to_rgbs.1.modulated_conv.modulation.bias', (512), 'float32')
    self.stylegan_decoder_to_rgbs_1_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.to_rgbs.1.modulated_conv.modulation.weight', (512,512), 'float32')
    self.stylegan_decoder_style_convs_4_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.4.modulated_conv.modulation.bias', (512), 'float32')
    self.stylegan_decoder_style_convs_4_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.4.modulated_conv.modulation.weight', (512,512), 'float32')
    self.stylegan_decoder_style_convs_5_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.5.modulated_conv.modulation.bias', (512), 'float32')
    self.stylegan_decoder_style_convs_5_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.5.modulated_conv.modulation.weight', (512,512), 'float32')
    self.stylegan_decoder_to_rgbs_2_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.to_rgbs.2.modulated_conv.modulation.bias', (512), 'float32')
    self.stylegan_decoder_to_rgbs_2_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.to_rgbs.2.modulated_conv.modulation.weight', (512,512), 'float32')
    self.stylegan_decoder_style_convs_6_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.6.modulated_conv.modulation.bias', (512), 'float32')
    self.stylegan_decoder_style_convs_6_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.6.modulated_conv.modulation.weight', (512,512), 'float32')
    self.stylegan_decoder_style_convs_7_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.7.modulated_conv.modulation.bias', (512), 'float32')
    self.stylegan_decoder_style_convs_7_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.7.modulated_conv.modulation.weight', (512,512), 'float32')
    self.stylegan_decoder_to_rgbs_3_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.to_rgbs.3.modulated_conv.modulation.bias', (512), 'float32')
    self.stylegan_decoder_to_rgbs_3_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.to_rgbs.3.modulated_conv.modulation.weight', (512,512), 'float32')
    self.stylegan_decoder_style_convs_8_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.8.modulated_conv.modulation.bias', (512), 'float32')
    self.stylegan_decoder_style_convs_8_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.8.modulated_conv.modulation.weight', (512,512), 'float32')
    self.stylegan_decoder_style_convs_9_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.9.modulated_conv.modulation.bias', (256), 'float32')
    self.stylegan_decoder_style_convs_9_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.9.modulated_conv.modulation.weight', (256,512), 'float32')
    self.stylegan_decoder_to_rgbs_4_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.to_rgbs.4.modulated_conv.modulation.bias', (256), 'float32')
    self.stylegan_decoder_to_rgbs_4_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.to_rgbs.4.modulated_conv.modulation.weight', (256,512), 'float32')
    self.stylegan_decoder_style_convs_10_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.10.modulated_conv.modulation.bias', (256), 'float32')
    self.stylegan_decoder_style_convs_10_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.10.modulated_conv.modulation.weight', (256,512), 'float32')
    self.stylegan_decoder_style_convs_11_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.11.modulated_conv.modulation.bias', (128), 'float32')
    self.stylegan_decoder_style_convs_11_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.11.modulated_conv.modulation.weight', (128,512), 'float32')
    self.stylegan_decoder_to_rgbs_5_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.to_rgbs.5.modulated_conv.modulation.bias', (128), 'float32')
    self.stylegan_decoder_to_rgbs_5_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.to_rgbs.5.modulated_conv.modulation.weight', (128,512), 'float32')
    self.stylegan_decoder_style_convs_12_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.12.modulated_conv.modulation.bias', (128), 'float32')
    self.stylegan_decoder_style_convs_12_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.12.modulated_conv.modulation.weight', (128,512), 'float32')
    self.stylegan_decoder_style_convs_13_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.13.modulated_conv.modulation.bias', (64), 'float32')
    self.stylegan_decoder_style_convs_13_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.style_convs.13.modulated_conv.modulation.weight', (64,512), 'float32')
    self.stylegan_decoder_to_rgbs_6_modulated_conv_modulation.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.to_rgbs.6.modulated_conv.modulation.bias', (64), 'float32')
    self.stylegan_decoder_to_rgbs_6_modulated_conv_modulation.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder.to_rgbs.6.modulated_conv.modulation.weight', (64,512), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_constant_input.weight', (1,512,4,4), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_conv1.bias', (1,512,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_conv1.weight', (1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_conv1_modulated_conv.weight', (1,512,512,3,3), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_to_rgb1.bias', (1,3,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_to_rgb1_modulated_conv.weight', (1,3,512,1,1), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_0.bias', (1,512,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_0.weight', (1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_0_modulated_conv.weight', (1,512,512,3,3), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_1.bias', (1,512,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_1.weight', (1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_1_modulated_conv.weight', (1,512,512,3,3), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_to_rgbs_0.bias', (1,3,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_to_rgbs_0_modulated_conv.weight', (1,3,512,1,1), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_2.bias', (1,512,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_2.weight', (1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_2_modulated_conv.weight', (1,512,512,3,3), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_3.bias', (1,512,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_3.weight', (1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_3_modulated_conv.weight', (1,512,512,3,3), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_to_rgbs_1.bias', (1,3,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_to_rgbs_1_modulated_conv.weight', (1,3,512,1,1), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_4.bias', (1,512,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_4.weight', (1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_4_modulated_conv.weight', (1,512,512,3,3), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_5.bias', (1,512,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_5.weight', (1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_5_modulated_conv.weight', (1,512,512,3,3), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_to_rgbs_2.bias', (1,3,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_to_rgbs_2_modulated_conv.weight', (1,3,512,1,1), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_6.bias', (1,512,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_6.weight', (1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_6_modulated_conv.weight', (1,512,512,3,3), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_7.bias', (1,512,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_7.weight', (1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_7_modulated_conv.weight', (1,512,512,3,3), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_to_rgbs_3.bias', (1,3,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_to_rgbs_3_modulated_conv.weight', (1,3,512,1,1), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_8.bias', (1,256,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_8.weight', (1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_8_modulated_conv.weight', (1,256,512,3,3), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_9.bias', (1,256,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_9.weight', (1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_9_modulated_conv.weight', (1,256,256,3,3), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_to_rgbs_4.bias', (1,3,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_to_rgbs_4_modulated_conv.weight', (1,3,256,1,1), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_10.bias', (1,128,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_10.weight', (1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_10_modulated_conv.weight', (1,128,256,3,3), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_11.bias', (1,128,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_11.weight', (1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_11_modulated_conv.weight', (1,128,128,3,3), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_to_rgbs_5.bias', (1,3,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_to_rgbs_5_modulated_conv.weight', (1,3,128,1,1), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_12.bias', (1,64,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_12.weight', (1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_12_modulated_conv.weight', (1,64,128,3,3), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_13.bias', (1,64,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_13.weight', (1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_style_convs_13_modulated_conv.weight', (1,64,64,3,3), 'float32')
    self.bias = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_to_rgbs_6.bias', (1,3,1,1), 'float32')
    self.weight = self.load_pnnx_bin_as_parameter(archive, 'stylegan_decoder_to_rgbs_6_modulated_conv.weight', (1,3,64,1,1), 'float32')
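    # note: the stylegan_decoder_* constant-input / noise-strength / bias tensors above are all
    # assigned to plain self.weight / self.bias, so each assignment overwrites the previous one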
    archive.close()

def load_pnnx_bin_as_parameter(self, archive, key, shape, dtype, requires_grad=True):
    return nn.Parameter(self.load_pnnx_bin_as_tensor(archive, key, shape, dtype), requires_grad)

def load_pnnx_bin_as_tensor(self, archive, key, shape, dtype):
    _, tmppath = tempfile.mkstemp()
    tmpf = open(tmppath, 'wb')
    with archive.open(key) as keyfile:
        tmpf.write(keyfile.read())
    tmpf.close()
    m = np.memmap(tmppath, dtype=dtype, mode='r', shape=shape).copy()
    os.remove(tmppath)
    return torch.from_numpy(m)
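
# --- illustration (not part of the generated file) ---
# A minimal standalone sketch of what the two helpers above do: pnnx.bin behaves like a zip
# archive (see the archive.open(key) calls), with each entry holding the raw bytes of one tensor.
# The file name 'model.pnnx.bin', the key and the shape in the commented call below are
# assumptions picked for the example, not values taken from this model.
import zipfile
import numpy as np
import torch

def read_pnnx_tensor(bin_path, key, shape, dtype='float32'):
    # read one entry from the pnnx.bin archive and view it as a tensor of the given shape,
    # equivalent to load_pnnx_bin_as_tensor above but without the temporary memmap file
    with zipfile.ZipFile(bin_path, 'r') as archive:
        with archive.open(key) as f:
            data = np.frombuffer(f.read(), dtype=dtype).reshape(shape).copy()
    return torch.from_numpy(data)

# w = read_pnnx_tensor('model.pnnx.bin', 'conv_body_first.weight', (32, 3, 1, 1))
# --- end of illustration ---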

def forward(self, v_0):
    v_1 = None
    v_2 = 2.000000e-01
    v_3 = self.conv_body_first(v_0)
    v_4 = aten::leaky_relu_(v_3, v_2)
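    # (the aten::leaky_relu_ / aten::clone / aten::normal_ calls in this forward() are ops that
    #  pnnx left in raw aten form, so this generated file is not directly runnable Python as-is)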
    v_5 = 2.000000e-01
    v_6 = self.conv_body_down_0_conv1(v_4)
    v_7 = aten::leaky_relu_(v_6, v_5)
    v_8 = F.upsample(input=v_7, align_corners=False, mode='bilinear', scale_factor=(0.500000,0.500000))
    v_9 = self.conv_body_down_0_conv2(v_8)
    v_10 = 2.000000e-01
    v_11 = aten::leaky_relu_(v_9, v_10)
    v_12 = F.upsample(input=v_4, align_corners=False, mode='bilinear', scale_factor=(0.500000,0.500000))
    v_13 = self.conv_body_down_0_skip(v_12)
    v_14 = (v_11 + v_13)
    v_15 = 2.000000e-01
    v_16 = self.conv_body_down_1_conv1(v_14)
    v_17 = aten::leaky_relu_(v_16, v_15)
    v_18 = F.upsample(input=v_17, align_corners=False, mode='bilinear', scale_factor=(0.500000,0.500000))
    v_19 = self.conv_body_down_1_conv2(v_18)
    v_20 = 2.000000e-01
    v_21 = aten::leaky_relu_(v_19, v_20)
    v_22 = F.upsample(input=v_14, align_corners=False, mode='bilinear', scale_factor=(0.500000,0.500000))
    v_23 = self.conv_body_down_1_skip(v_22)
    v_24 = (v_21 + v_23)
    v_25 = 2.000000e-01
    v_26 = self.conv_body_down_2_conv1(v_24)
    v_27 = aten::leaky_relu_(v_26, v_25)
    v_28 = F.upsample(input=v_27, align_corners=False, mode='bilinear', scale_factor=(0.500000,0.500000))
    v_29 = self.conv_body_down_2_conv2(v_28)
    v_30 = 2.000000e-01
    v_31 = aten::leaky_relu_(v_29, v_30)
    v_32 = F.upsample(input=v_24, align_corners=False, mode='bilinear', scale_factor=(0.500000,0.500000))
    v_33 = self.conv_body_down_2_skip(v_32)
    v_34 = (v_31 + v_33)
    v_35 = 2.000000e-01
    v_36 = self.conv_body_down_3_conv1(v_34)
    v_37 = aten::leaky_relu_(v_36, v_35)
    v_38 = F.upsample(input=v_37, align_corners=False, mode='bilinear', scale_factor=(0.500000,0.500000))
    v_39 = self.conv_body_down_3_conv2(v_38)
    v_40 = 2.000000e-01
    v_41 = aten::leaky_relu_(v_39, v_40)
    v_42 = F.upsample(input=v_34, align_corners=False, mode='bilinear', scale_factor=(0.500000,0.500000))
    v_43 = self.conv_body_down_3_skip(v_42)
    v_44 = (v_41 + v_43)
    v_45 = 2.000000e-01
    v_46 = self.conv_body_down_4_conv1(v_44)
    v_47 = aten::leaky_relu_(v_46, v_45)
    v_48 = F.upsample(input=v_47, align_corners=False, mode='bilinear', scale_factor=(0.500000,0.500000))
    v_49 = self.conv_body_down_4_conv2(v_48)
    v_50 = 2.000000e-01
    v_51 = aten::leaky_relu_(v_49, v_50)
    v_52 = F.upsample(input=v_44, align_corners=False, mode='bilinear', scale_factor=(0.500000,0.500000))
    v_53 = self.conv_body_down_4_skip(v_52)
    v_54 = (v_51 + v_53)
    v_55 = 2.000000e-01
    v_56 = self.conv_body_down_5_conv1(v_54)
    v_57 = aten::leaky_relu_(v_56, v_55)
    v_58 = F.upsample(input=v_57, align_corners=False, mode='bilinear', scale_factor=(0.500000,0.500000))
    v_59 = self.conv_body_down_5_conv2(v_58)
    v_60 = 2.000000e-01
    v_61 = aten::leaky_relu_(v_59, v_60)
    v_62 = F.upsample(input=v_54, align_corners=False, mode='bilinear', scale_factor=(0.500000,0.500000))
    v_63 = self.conv_body_down_5_skip(v_62)
    v_64 = (v_61 + v_63)
    v_65 = 2.000000e-01
    v_66 = self.conv_body_down_6_conv1(v_64)
    v_67 = aten::leaky_relu_(v_66, v_65)
    v_68 = F.upsample(input=v_67, align_corners=False, mode='bilinear', scale_factor=(0.500000,0.500000))
    v_69 = self.conv_body_down_6_conv2(v_68)
    v_70 = 2.000000e-01
    v_71 = aten::leaky_relu_(v_69, v_70)
    v_72 = F.upsample(input=v_64, align_corners=False, mode='bilinear', scale_factor=(0.500000,0.500000))
    v_73 = self.conv_body_down_6_skip(v_72)
    v_74 = (v_71 + v_73)
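    # end of the downsampling path: v_14 / v_24 / v_34 / v_44 / v_54 / v_64 / v_74 are the
    # per-scale skip features that get added back in during the upsampling path below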
    v_75 = self.final_conv(v_74)
    v_76 = 2.000000e-01
    v_77 = aten::leaky_relu_(v_75, v_76)
    v_78 = v_77.view(1, -1)
    v_79 = self.final_linear(v_78)
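    # v_79 is the flattened feature passed through final_linear (out_features 8192 = 16 x 512);
    # it is reshaped to (1, 16, 512) at v_233 below and indexed per layer with .select(dim=1, ...)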
    v_80 = (v_77 + v_74)
    v_81 = 2.000000e-01
    v_82 = self.conv_body_up_0_conv1(v_80)
    v_83 = aten::leaky_relu_(v_82, v_81)
    v_84 = F.upsample(input=v_83, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_85 = self.conv_body_up_0_conv2(v_84)
    v_86 = 2.000000e-01
    v_87 = aten::leaky_relu_(v_85, v_86)
    v_88 = F.upsample(input=v_80, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_89 = self.conv_body_up_0_skip(v_88)
    v_90 = (v_87 + v_89)
    v_91 = self.condition_scale_0_0(v_90)
    v_92 = self.condition_scale_0_1(v_91)
    v_93 = self.condition_scale_0_2(v_92)
    v_94 = aten::clone(v_93, v_1)
    v_95 = self.condition_shift_0_0(v_90)
    v_96 = self.condition_shift_0_1(v_95)
    v_97 = self.condition_shift_0_2(v_96)
    v_98 = None
    v_99 = aten::clone(v_97, v_98)
    v_100 = self.toRGB_0(v_90)
    v_101 = (v_90 + v_64)
    v_102 = 2.000000e-01
    v_103 = self.conv_body_up_1_conv1(v_101)
    v_104 = aten::leaky_relu_(v_103, v_102)
    v_105 = F.upsample(input=v_104, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_106 = self.conv_body_up_1_conv2(v_105)
    v_107 = 2.000000e-01
    v_108 = aten::leaky_relu_(v_106, v_107)
    v_109 = F.upsample(input=v_101, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_110 = self.conv_body_up_1_skip(v_109)
    v_111 = (v_108 + v_110)
    v_112 = self.condition_scale_1_0(v_111)
    v_113 = self.condition_scale_1_1(v_112)
    v_114 = self.condition_scale_1_2(v_113)
    v_115 = None
    v_116 = aten::clone(v_114, v_115)
    v_117 = self.condition_shift_1_0(v_111)
    v_118 = self.condition_shift_1_1(v_117)
    v_119 = self.condition_shift_1_2(v_118)
    v_120 = None
    v_121 = aten::clone(v_119, v_120)
    v_122 = self.toRGB_1(v_111)
    v_123 = (v_111 + v_54)
    v_124 = 2.000000e-01
    v_125 = self.conv_body_up_2_conv1(v_123)
    v_126 = aten::leaky_relu_(v_125, v_124)
    v_127 = F.upsample(input=v_126, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_128 = self.conv_body_up_2_conv2(v_127)
    v_129 = 2.000000e-01
    v_130 = aten::leaky_relu_(v_128, v_129)
    v_131 = F.upsample(input=v_123, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_132 = self.conv_body_up_2_skip(v_131)
    v_133 = (v_130 + v_132)
    v_134 = self.condition_scale_2_0(v_133)
    v_135 = self.condition_scale_2_1(v_134)
    v_136 = self.condition_scale_2_2(v_135)
    v_137 = None
    v_138 = aten::clone(v_136, v_137)
    v_139 = self.condition_shift_2_0(v_133)
    v_140 = self.condition_shift_2_1(v_139)
    v_141 = self.condition_shift_2_2(v_140)
    v_142 = None
    v_143 = aten::clone(v_141, v_142)
    v_144 = self.toRGB_2(v_133)
    v_145 = (v_133 + v_44)
    v_146 = 2.000000e-01
    v_147 = self.conv_body_up_3_conv1(v_145)
    v_148 = aten::leaky_relu_(v_147, v_146)
    v_149 = F.upsample(input=v_148, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_150 = self.conv_body_up_3_conv2(v_149)
    v_151 = 2.000000e-01
    v_152 = aten::leaky_relu_(v_150, v_151)
    v_153 = F.upsample(input=v_145, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_154 = self.conv_body_up_3_skip(v_153)
    v_155 = (v_152 + v_154)
    v_156 = self.condition_scale_3_0(v_155)
    v_157 = self.condition_scale_3_1(v_156)
    v_158 = self.condition_scale_3_2(v_157)
    v_159 = None
    v_160 = aten::clone(v_158, v_159)
    v_161 = self.condition_shift_3_0(v_155)
    v_162 = self.condition_shift_3_1(v_161)
    v_163 = self.condition_shift_3_2(v_162)
    v_164 = None
    v_165 = aten::clone(v_163, v_164)
    v_166 = self.toRGB_3(v_155)
    v_167 = (v_155 + v_34)
    v_168 = 2.000000e-01
    v_169 = self.conv_body_up_4_conv1(v_167)
    v_170 = aten::leaky_relu_(v_169, v_168)
    v_171 = F.upsample(input=v_170, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_172 = self.conv_body_up_4_conv2(v_171)
    v_173 = 2.000000e-01
    v_174 = aten::leaky_relu_(v_172, v_173)
    v_175 = F.upsample(input=v_167, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_176 = self.conv_body_up_4_skip(v_175)
    v_177 = (v_174 + v_176)
    v_178 = self.condition_scale_4_0(v_177)
    v_179 = self.condition_scale_4_1(v_178)
    v_180 = self.condition_scale_4_2(v_179)
    v_181 = None
    v_182 = aten::clone(v_180, v_181)
    v_183 = self.condition_shift_4_0(v_177)
    v_184 = self.condition_shift_4_1(v_183)
    v_185 = self.condition_shift_4_2(v_184)
    v_186 = None
    v_187 = aten::clone(v_185, v_186)
    v_188 = self.toRGB_4(v_177)
    v_189 = (v_177 + v_24)
    v_190 = 2.000000e-01
    v_191 = self.conv_body_up_5_conv1(v_189)
    v_192 = aten::leaky_relu_(v_191, v_190)
    v_193 = F.upsample(input=v_192, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_194 = self.conv_body_up_5_conv2(v_193)
    v_195 = 2.000000e-01
    v_196 = aten::leaky_relu_(v_194, v_195)
    v_197 = F.upsample(input=v_189, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_198 = self.conv_body_up_5_skip(v_197)
    v_199 = (v_196 + v_198)
    v_200 = self.condition_scale_5_0(v_199)
    v_201 = self.condition_scale_5_1(v_200)
    v_202 = self.condition_scale_5_2(v_201)
    v_203 = None
    v_204 = aten::clone(v_202, v_203)
    v_205 = self.condition_shift_5_0(v_199)
    v_206 = self.condition_shift_5_1(v_205)
    v_207 = self.condition_shift_5_2(v_206)
    v_208 = None
    v_209 = aten::clone(v_207, v_208)
    v_210 = self.toRGB_5(v_199)
    v_211 = (v_199 + v_14)
    v_212 = 2.000000e-01
    v_213 = self.conv_body_up_6_conv1(v_211)
    v_214 = aten::leaky_relu_(v_213, v_212)
    v_215 = F.upsample(input=v_214, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_216 = self.conv_body_up_6_conv2(v_215)
    v_217 = 2.000000e-01
    v_218 = aten::leaky_relu_(v_216, v_217)
    v_219 = F.upsample(input=v_211, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_220 = self.conv_body_up_6_skip(v_219)
    v_221 = (v_218 + v_220)
    v_222 = self.condition_scale_6_0(v_221)
    v_223 = self.condition_scale_6_1(v_222)
    v_224 = self.condition_scale_6_2(v_223)
    v_225 = None
    v_226 = aten::clone(v_224, v_225)
    v_227 = self.condition_shift_6_0(v_221)
    v_228 = self.condition_shift_6_1(v_227)
    v_229 = self.condition_shift_6_2(v_228)
    v_230 = None
    v_231 = aten::clone(v_229, v_230)
    v_232 = self.toRGB_6(v_221)
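    # end of the U-Net body: the aten::clone outputs above (v_94/v_99, v_116/v_121, ..., v_226/v_231)
    # are the per-resolution scale/shift condition pairs (condition_scale_* / condition_shift_*),
    # and v_100 ... v_232 the intermediate toRGB outputs; everything from v_233 on is the
    # stylegan decoder driven by the style code v_79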
    v_233 = v_79.view(1, -1, 512)
    v_234 = self.weight
    v_235 = None
    v_236 = 1.000000e+00
    v_237 = 0.000000e+00
    v_238 = self.bias
    v_239 = self.weight
    v_240 = self.weight
    v_241 = v_234.repeat(1, 1, 1, 1)
    v_242 = v_233.select(dim=1, index=0)
    v_243 = self.stylegan_decoder_style_conv1_modulated_conv_modulation(v_242)
    v_244 = v_243.view(1, 1, 512, 1, 1)
    v_245 = (v_240 * v_244)
    v_246 = v_245.pow(2)
    v_247 = torch.sum(input=v_246, dim=(2,3,4), keepdim=False)
    v_248 = torch.rsqrt((v_247 + 1.000000e-08))
    v_249 = v_248.view(1, 512, 1, 1, 1)
    v_250 = (v_245 * v_249)
    v_251 = v_250.view(512, 512, 3, 3)
    v_252 = F.conv2d(input=v_241, weight=v_251, bias=None, dilation=(1,1), groups=1, padding=(1,1), stride=(1,1))
    v_253 = (v_252 * 1.414214e+00)
    v_254 = v_253.new_empty(size=(1,1,4,4))
    v_255 = aten::normal_(v_254, v_237, v_236, v_235)
    v_256 = ((v_253 + (v_239 * v_255)) + v_238)
    v_257 = self.stylegan_decoder_style_conv1_activate(v_256)
    v_258 = self.bias
    v_259 = self.weight
    v_260 = v_233.select(dim=1, index=1)
    v_261 = self.stylegan_decoder_to_rgb1_modulated_conv_modulation(v_260)
    v_262 = v_261.view(1, 1, 512, 1, 1)
    v_263 = (v_259 * v_262)
    v_264 = v_263.view(3, 512, 1, 1)
    v_265 = F.conv2d(input=v_257, weight=v_264, bias=None, dilation=(1,1), groups=1, padding=(0,0), stride=(1,1))
    v_266 = (v_265 + v_258)
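    # v_234..v_266 above is one StyleGAN2-style modulated conv block; the same pattern repeats for
    # every style_convs_* / to_rgbs_* block below:
    #   s    = modulation(style)                          -> per-input-channel scale
    #   w'   = weight * s;  demod = rsqrt(sum(w'^2 over in/kh/kw) + 1e-8);  w'' = w' * demod
    #   out  = F.conv2d(x, w'' reshaped to 4-D) * 1.414214 (~ sqrt 2), then noise is added
    #          (aten::normal_ into the new_empty buffer, scaled by the per-layer noise weight)
    #          followed by the per-layer bias and the activate module
    # (the to_rgb* blocks skip the demodulation / noise step, as in the to_rgb1 code just above)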
    v_267 = None
    v_268 = 1.000000e+00
    v_269 = 0.000000e+00
    v_270 = self.bias
    v_271 = self.weight
    v_272 = self.weight
    v_273 = v_233.select(dim=1, index=1)
    v_274 = self.stylegan_decoder_style_convs_0_modulated_conv_modulation(v_273)
    v_275 = v_274.view(1, 1, 512, 1, 1)
    v_276 = (v_272 * v_275)
    v_277 = v_276.pow(2)
    v_278 = torch.sum(input=v_277, dim=(2,3,4), keepdim=False)
    v_279 = torch.rsqrt((v_278 + 1.000000e-08))
    v_280 = v_279.view(1, 512, 1, 1, 1)
    v_281 = (v_276 * v_280)
    v_282 = F.upsample(input=v_257, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_283 = v_281.view(512, 512, 3, 3)
    v_284 = F.conv2d(input=v_282, weight=v_283, bias=None, dilation=(1,1), groups=1, padding=(1,1), stride=(1,1))
    v_285 = (v_284 * 1.414214e+00)
    v_286 = v_285.new_empty(size=(1,1,8,8))
    v_287 = aten::normal_(v_286, v_269, v_268, v_267)
    v_288 = ((v_285 + (v_271 * v_287)) + v_270)
    v_289 = self.stylegan_decoder_style_convs_0_activate(v_288)
    v_290, v_291 = torch.split(tensor=v_289, dim=1, split_size_or_sections=256)
    v_292 = ((v_291 * v_94) + v_99)
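    # SFT-style conditioning: v_289 is split into two 256-channel halves, one half gets
    # half * scale (v_94) + shift (v_99), and the halves are concatenated again at v_299 below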
    v_293 = None
    v_294 = 1.000000e+00
    v_295 = 0.000000e+00
    v_296 = self.bias
    v_297 = self.weight
    v_298 = self.weight
    v_299 = torch.cat((v_290, v_292), dim=1)
    v_300 = v_233.select(dim=1, index=2)
    v_301 = self.stylegan_decoder_style_convs_1_modulated_conv_modulation(v_300)
    v_302 = v_301.view(1, 1, 512, 1, 1)
    v_303 = (v_298 * v_302)
    v_304 = v_303.pow(2)
    v_305 = torch.sum(input=v_304, dim=(2,3,4), keepdim=False)
    v_306 = torch.rsqrt((v_305 + 1.000000e-08))
    v_307 = v_306.view(1, 512, 1, 1, 1)
    v_308 = (v_303 * v_307)
    v_309 = v_308.view(512, 512, 3, 3)
    v_310 = F.conv2d(input=v_299, weight=v_309, bias=None, dilation=(1,1), groups=1, padding=(1,1), stride=(1,1))
    v_311 = (v_310 * 1.414214e+00)
    v_312 = v_311.new_empty(size=(1,1,8,8))
    v_313 = aten::normal_(v_312, v_295, v_294, v_293)
    v_314 = ((v_311 + (v_297 * v_313)) + v_296)
    v_315 = self.stylegan_decoder_style_convs_1_activate(v_314)
    v_316 = self.bias
    v_317 = self.weight
    v_318 = v_233.select(dim=1, index=3)
    v_319 = self.stylegan_decoder_to_rgbs_0_modulated_conv_modulation(v_318)
    v_320 = v_319.view(1, 1, 512, 1, 1)
    v_321 = (v_317 * v_320)
    v_322 = v_321.view(3, 512, 1, 1)
    v_323 = F.conv2d(input=v_315, weight=v_322, bias=None, dilation=(1,1), groups=1, padding=(0,0), stride=(1,1))
    v_324 = F.upsample(input=v_266, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_325 = ((v_323 + v_316) + v_324)
    v_326 = None
    v_327 = 1.000000e+00
    v_328 = 0.000000e+00
    v_329 = self.bias
    v_330 = self.weight
    v_331 = self.weight
    v_332 = v_233.select(dim=1, index=3)
    v_333 = self.stylegan_decoder_style_convs_2_modulated_conv_modulation(v_332)
    v_334 = v_333.view(1, 1, 512, 1, 1)
    v_335 = (v_331 * v_334)
    v_336 = v_335.pow(2)
    v_337 = torch.sum(input=v_336, dim=(2,3,4), keepdim=False)
    v_338 = torch.rsqrt((v_337 + 1.000000e-08))
    v_339 = v_338.view(1, 512, 1, 1, 1)
    v_340 = (v_335 * v_339)
    v_341 = F.upsample(input=v_315, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_342 = v_340.view(512, 512, 3, 3)
    v_343 = F.conv2d(input=v_341, weight=v_342, bias=None, dilation=(1,1), groups=1, padding=(1,1), stride=(1,1))
    v_344 = (v_343 * 1.414214e+00)
    v_345 = v_344.new_empty(size=(1,1,16,16))
    v_346 = aten::normal_(v_345, v_328, v_327, v_326)
    v_347 = ((v_344 + (v_330 * v_346)) + v_329)
    v_348 = self.stylegan_decoder_style_convs_2_activate(v_347)
    v_349, v_350 = torch.split(tensor=v_348, dim=1, split_size_or_sections=256)
    v_351 = ((v_350 * v_116) + v_121)
    v_352 = None
    v_353 = 1.000000e+00
    v_354 = 0.000000e+00
    v_355 = self.bias
    v_356 = self.weight
    v_357 = self.weight
    v_358 = torch.cat((v_349, v_351), dim=1)
    v_359 = v_233.select(dim=1, index=4)
    v_360 = self.stylegan_decoder_style_convs_3_modulated_conv_modulation(v_359)
    v_361 = v_360.view(1, 1, 512, 1, 1)
    v_362 = (v_357 * v_361)
    v_363 = v_362.pow(2)
    v_364 = torch.sum(input=v_363, dim=(2,3,4), keepdim=False)
    v_365 = torch.rsqrt((v_364 + 1.000000e-08))
    v_366 = v_365.view(1, 512, 1, 1, 1)
    v_367 = (v_362 * v_366)
    v_368 = v_367.view(512, 512, 3, 3)
    v_369 = F.conv2d(input=v_358, weight=v_368, bias=None, dilation=(1,1), groups=1, padding=(1,1), stride=(1,1))
    v_370 = (v_369 * 1.414214e+00)
    v_371 = v_370.new_empty(size=(1,1,16,16))
    v_372 = aten::normal_(v_371, v_354, v_353, v_352)
    v_373 = ((v_370 + (v_356 * v_372)) + v_355)
    v_374 = self.stylegan_decoder_style_convs_3_activate(v_373)
    v_375 = self.bias
    v_376 = self.weight
    v_377 = v_233.select(dim=1, index=5)
    v_378 = self.stylegan_decoder_to_rgbs_1_modulated_conv_modulation(v_377)
    v_379 = v_378.view(1, 1, 512, 1, 1)
    v_380 = (v_376 * v_379)
    v_381 = v_380.view(3, 512, 1, 1)
    v_382 = F.conv2d(input=v_374, weight=v_381, bias=None, dilation=(1,1), groups=1, padding=(0,0), stride=(1,1))
    v_383 = F.upsample(input=v_325, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_384 = ((v_382 + v_375) + v_383)
    v_385 = None
    v_386 = 1.000000e+00
    v_387 = 0.000000e+00
    v_388 = self.bias
    v_389 = self.weight
    v_390 = self.weight
    v_391 = v_233.select(dim=1, index=5)
    v_392 = self.stylegan_decoder_style_convs_4_modulated_conv_modulation(v_391)
    v_393 = v_392.view(1, 1, 512, 1, 1)
    v_394 = (v_390 * v_393)
    v_395 = v_394.pow(2)
    v_396 = torch.sum(input=v_395, dim=(2,3,4), keepdim=False)
    v_397 = torch.rsqrt((v_396 + 1.000000e-08))
    v_398 = v_397.view(1, 512, 1, 1, 1)
    v_399 = (v_394 * v_398)
    v_400 = F.upsample(input=v_374, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_401 = v_399.view(512, 512, 3, 3)
    v_402 = F.conv2d(input=v_400, weight=v_401, bias=None, dilation=(1,1), groups=1, padding=(1,1), stride=(1,1))
    v_403 = (v_402 * 1.414214e+00)
    v_404 = v_403.new_empty(size=(1,1,32,32))
    v_405 = aten::normal_(v_404, v_387, v_386, v_385)
    v_406 = ((v_403 + (v_389 * v_405)) + v_388)
    v_407 = self.stylegan_decoder_style_convs_4_activate(v_406)
    v_408, v_409 = torch.split(tensor=v_407, dim=1, split_size_or_sections=256)
    v_410 = ((v_409 * v_138) + v_143)
    v_411 = None
    v_412 = 1.000000e+00
    v_413 = 0.000000e+00
    v_414 = self.bias
    v_415 = self.weight
    v_416 = self.weight
    v_417 = torch.cat((v_408, v_410), dim=1)
    v_418 = v_233.select(dim=1, index=6)
    v_419 = self.stylegan_decoder_style_convs_5_modulated_conv_modulation(v_418)
    v_420 = v_419.view(1, 1, 512, 1, 1)
    v_421 = (v_416 * v_420)
    v_422 = v_421.pow(2)
    v_423 = torch.sum(input=v_422, dim=(2,3,4), keepdim=False)
    v_424 = torch.rsqrt((v_423 + 1.000000e-08))
    v_425 = v_424.view(1, 512, 1, 1, 1)
    v_426 = (v_421 * v_425)
    v_427 = v_426.view(512, 512, 3, 3)
    v_428 = F.conv2d(input=v_417, weight=v_427, bias=None, dilation=(1,1), groups=1, padding=(1,1), stride=(1,1))
    v_429 = (v_428 * 1.414214e+00)
    v_430 = v_429.new_empty(size=(1,1,32,32))
    v_431 = aten::normal_(v_430, v_413, v_412, v_411)
    v_432 = ((v_429 + (v_415 * v_431)) + v_414)
    v_433 = self.stylegan_decoder_style_convs_5_activate(v_432)
    v_434 = self.bias
    v_435 = self.weight
    v_436 = v_233.select(dim=1, index=7)
    v_437 = self.stylegan_decoder_to_rgbs_2_modulated_conv_modulation(v_436)
    v_438 = v_437.view(1, 1, 512, 1, 1)
    v_439 = (v_435 * v_438)
    v_440 = v_439.view(3, 512, 1, 1)
    v_441 = F.conv2d(input=v_433, weight=v_440, bias=None, dilation=(1,1), groups=1, padding=(0,0), stride=(1,1))
    v_442 = F.upsample(input=v_384, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_443 = ((v_441 + v_434) + v_442)
    v_444 = None
    v_445 = 1.000000e+00
    v_446 = 0.000000e+00
    v_447 = self.bias
    v_448 = self.weight
    v_449 = self.weight
    v_450 = v_233.select(dim=1, index=7)
    v_451 = self.stylegan_decoder_style_convs_6_modulated_conv_modulation(v_450)
    v_452 = v_451.view(1, 1, 512, 1, 1)
    v_453 = (v_449 * v_452)
    v_454 = v_453.pow(2)
    v_455 = torch.sum(input=v_454, dim=(2,3,4), keepdim=False)
    v_456 = torch.rsqrt((v_455 + 1.000000e-08))
    v_457 = v_456.view(1, 512, 1, 1, 1)
    v_458 = (v_453 * v_457)
    v_459 = F.upsample(input=v_433, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_460 = v_458.view(512, 512, 3, 3)
    v_461 = F.conv2d(input=v_459, weight=v_460, bias=None, dilation=(1,1), groups=1, padding=(1,1), stride=(1,1))
    v_462 = (v_461 * 1.414214e+00)
    v_463 = v_462.new_empty(size=(1,1,64,64))
    v_464 = aten::normal_(v_463, v_446, v_445, v_444)
    v_465 = ((v_462 + (v_448 * v_464)) + v_447)
    v_466 = self.stylegan_decoder_style_convs_6_activate(v_465)
    v_467, v_468 = torch.split(tensor=v_466, dim=1, split_size_or_sections=256)
    v_469 = ((v_468 * v_160) + v_165)
    v_470 = None
    v_471 = 1.000000e+00
    v_472 = 0.000000e+00
    v_473 = self.bias
    v_474 = self.weight
    v_475 = self.weight
    v_476 = torch.cat((v_467, v_469), dim=1)
    v_477 = v_233.select(dim=1, index=8)
    v_478 = self.stylegan_decoder_style_convs_7_modulated_conv_modulation(v_477)
    v_479 = v_478.view(1, 1, 512, 1, 1)
    v_480 = (v_475 * v_479)
    v_481 = v_480.pow(2)
    v_482 = torch.sum(input=v_481, dim=(2,3,4), keepdim=False)
    v_483 = torch.rsqrt((v_482 + 1.000000e-08))
    v_484 = v_483.view(1, 512, 1, 1, 1)
    v_485 = (v_480 * v_484)
    v_486 = v_485.view(512, 512, 3, 3)
    v_487 = F.conv2d(input=v_476, weight=v_486, bias=None, dilation=(1,1), groups=1, padding=(1,1), stride=(1,1))
    v_488 = (v_487 * 1.414214e+00)
    v_489 = v_488.new_empty(size=(1,1,64,64))
    v_490 = aten::normal_(v_489, v_472, v_471, v_470)
    v_491 = ((v_488 + (v_474 * v_490)) + v_473)
    v_492 = self.stylegan_decoder_style_convs_7_activate(v_491)
    v_493 = self.bias
    v_494 = self.weight
    v_495 = v_233.select(dim=1, index=9)
    v_496 = self.stylegan_decoder_to_rgbs_3_modulated_conv_modulation(v_495)
    v_497 = v_496.view(1, 1, 512, 1, 1)
    v_498 = (v_494 * v_497)
    v_499 = v_498.view(3, 512, 1, 1)
    v_500 = F.conv2d(input=v_492, weight=v_499, bias=None, dilation=(1,1), groups=1, padding=(0,0), stride=(1,1))
    v_501 = F.upsample(input=v_443, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_502 = ((v_500 + v_493) + v_501)
    v_503 = None
    v_504 = 1.000000e+00
    v_505 = 0.000000e+00
    v_506 = self.bias
    v_507 = self.weight
    v_508 = self.weight
    v_509 = v_233.select(dim=1, index=9)
    v_510 = self.stylegan_decoder_style_convs_8_modulated_conv_modulation(v_509)
    v_511 = v_510.view(1, 1, 512, 1, 1)
    v_512 = (v_508 * v_511)
    v_513 = v_512.pow(2)
    v_514 = torch.sum(input=v_513, dim=(2,3,4), keepdim=False)
    v_515 = torch.rsqrt((v_514 + 1.000000e-08))
    v_516 = v_515.view(1, 256, 1, 1, 1)
    v_517 = (v_512 * v_516)
    v_518 = F.upsample(input=v_492, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_519 = v_517.view(256, 512, 3, 3)
    v_520 = F.conv2d(input=v_518, weight=v_519, bias=None, dilation=(1,1), groups=1, padding=(1,1), stride=(1,1))
    v_521 = (v_520 * 1.414214e+00)
    v_522 = v_521.new_empty(size=(1,1,128,128))
    v_523 = aten::normal_(v_522, v_505, v_504, v_503)
    v_524 = ((v_521 + (v_507 * v_523)) + v_506)
    v_525 = self.stylegan_decoder_style_convs_8_activate(v_524)
    v_526, v_527 = torch.split(tensor=v_525, dim=1, split_size_or_sections=128)
    v_528 = ((v_527 * v_182) + v_187)
    v_529 = None
    v_530 = 1.000000e+00
    v_531 = 0.000000e+00
    v_532 = self.bias
    v_533 = self.weight
    v_534 = self.weight
    v_535 = torch.cat((v_526, v_528), dim=1)
    v_536 = v_233.select(dim=1, index=10)
    v_537 = self.stylegan_decoder_style_convs_9_modulated_conv_modulation(v_536)
    v_538 = v_537.view(1, 1, 256, 1, 1)
    v_539 = (v_534 * v_538)
    v_540 = v_539.pow(2)
    v_541 = torch.sum(input=v_540, dim=(2,3,4), keepdim=False)
    v_542 = torch.rsqrt((v_541 + 1.000000e-08))
    v_543 = v_542.view(1, 256, 1, 1, 1)
    v_544 = (v_539 * v_543)
    v_545 = v_544.view(256, 256, 3, 3)
    v_546 = F.conv2d(input=v_535, weight=v_545, bias=None, dilation=(1,1), groups=1, padding=(1,1), stride=(1,1))
    v_547 = (v_546 * 1.414214e+00)
    v_548 = v_547.new_empty(size=(1,1,128,128))
    v_549 = aten::normal_(v_548, v_531, v_530, v_529)
    v_550 = ((v_547 + (v_533 * v_549)) + v_532)
    v_551 = self.stylegan_decoder_style_convs_9_activate(v_550)
    v_552 = self.bias
    v_553 = self.weight
    v_554 = v_233.select(dim=1, index=11)
    v_555 = self.stylegan_decoder_to_rgbs_4_modulated_conv_modulation(v_554)
    v_556 = v_555.view(1, 1, 256, 1, 1)
    v_557 = (v_553 * v_556)
    v_558 = v_557.view(3, 256, 1, 1)
    v_559 = F.conv2d(input=v_551, weight=v_558, bias=None, dilation=(1,1), groups=1, padding=(0,0), stride=(1,1))
    v_560 = F.upsample(input=v_502, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_561 = ((v_559 + v_552) + v_560)
    v_562 = None
    v_563 = 1.000000e+00
    v_564 = 0.000000e+00
    v_565 = self.bias
    v_566 = self.weight
    v_567 = self.weight
    v_568 = v_233.select(dim=1, index=11)
    v_569 = self.stylegan_decoder_style_convs_10_modulated_conv_modulation(v_568)
    v_570 = v_569.view(1, 1, 256, 1, 1)
    v_571 = (v_567 * v_570)
    v_572 = v_571.pow(2)
    v_573 = torch.sum(input=v_572, dim=(2,3,4), keepdim=False)
    v_574 = torch.rsqrt((v_573 + 1.000000e-08))
    v_575 = v_574.view(1, 128, 1, 1, 1)
    v_576 = (v_571 * v_575)
    v_577 = F.upsample(input=v_551, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_578 = v_576.view(128, 256, 3, 3)
    v_579 = F.conv2d(input=v_577, weight=v_578, bias=None, dilation=(1,1), groups=1, padding=(1,1), stride=(1,1))
    v_580 = (v_579 * 1.414214e+00)
    v_581 = v_580.new_empty(size=(1,1,256,256))
    v_582 = aten::normal_(v_581, v_564, v_563, v_562)
    v_583 = ((v_580 + (v_566 * v_582)) + v_565)
    v_584 = self.stylegan_decoder_style_convs_10_activate(v_583)
    v_585, v_586 = torch.split(tensor=v_584, dim=1, split_size_or_sections=64)
    v_587 = ((v_586 * v_204) + v_209)
    v_588 = None
    v_589 = 1.000000e+00
    v_590 = 0.000000e+00
    v_591 = self.bias
    v_592 = self.weight
    v_593 = self.weight
    v_594 = torch.cat((v_585, v_587), dim=1)
    v_595 = v_233.select(dim=1, index=12)
    v_596 = self.stylegan_decoder_style_convs_11_modulated_conv_modulation(v_595)
    v_597 = v_596.view(1, 1, 128, 1, 1)
    v_598 = (v_593 * v_597)
    v_599 = v_598.pow(2)
    v_600 = torch.sum(input=v_599, dim=(2,3,4), keepdim=False)
    v_601 = torch.rsqrt((v_600 + 1.000000e-08))
    v_602 = v_601.view(1, 128, 1, 1, 1)
    v_603 = (v_598 * v_602)
    v_604 = v_603.view(128, 128, 3, 3)
    v_605 = F.conv2d(input=v_594, weight=v_604, bias=None, dilation=(1,1), groups=1, padding=(1,1), stride=(1,1))
    v_606 = (v_605 * 1.414214e+00)
    v_607 = v_606.new_empty(size=(1,1,256,256))
    v_608 = aten::normal_(v_607, v_590, v_589, v_588)
    v_609 = ((v_606 + (v_592 * v_608)) + v_591)
    v_610 = self.stylegan_decoder_style_convs_11_activate(v_609)
    v_611 = self.bias
    v_612 = self.weight
    v_613 = v_233.select(dim=1, index=13)
    v_614 = self.stylegan_decoder_to_rgbs_5_modulated_conv_modulation(v_613)
    v_615 = v_614.view(1, 1, 128, 1, 1)
    v_616 = (v_612 * v_615)
    v_617 = v_616.view(3, 128, 1, 1)
    v_618 = F.conv2d(input=v_610, weight=v_617, bias=None, dilation=(1,1), groups=1, padding=(0,0), stride=(1,1))
    v_619 = F.upsample(input=v_561, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_620 = ((v_618 + v_611) + v_619)
    v_621 = None
    v_622 = 1.000000e+00
    v_623 = 0.000000e+00
    v_624 = self.bias
    v_625 = self.weight
    v_626 = self.weight
    v_627 = v_233.select(dim=1, index=13)
    v_628 = self.stylegan_decoder_style_convs_12_modulated_conv_modulation(v_627)
    v_629 = v_628.view(1, 1, 128, 1, 1)
    v_630 = (v_626 * v_629)
    v_631 = v_630.pow(2)
    v_632 = torch.sum(input=v_631, dim=(2,3,4), keepdim=False)
    v_633 = torch.rsqrt((v_632 + 1.000000e-08))
    v_634 = v_633.view(1, 64, 1, 1, 1)
    v_635 = (v_630 * v_634)
    v_636 = F.upsample(input=v_610, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_637 = v_635.view(64, 128, 3, 3)
    v_638 = F.conv2d(input=v_636, weight=v_637, bias=None, dilation=(1,1), groups=1, padding=(1,1), stride=(1,1))
    v_639 = (v_638 * 1.414214e+00)
    v_640 = v_639.new_empty(size=(1,1,512,512))
    v_641 = aten::normal_(v_640, v_623, v_622, v_621)
    v_642 = ((v_639 + (v_625 * v_641)) + v_624)
    v_643 = self.stylegan_decoder_style_convs_12_activate(v_642)
    v_644, v_645 = torch.split(tensor=v_643, dim=1, split_size_or_sections=32)
    v_646 = ((v_645 * v_226) + v_231)
    v_647 = None
    v_648 = 1.000000e+00
    v_649 = 0.000000e+00
    v_650 = self.bias
    v_651 = self.weight
    v_652 = self.weight
    v_653 = torch.cat((v_644, v_646), dim=1)
    v_654 = v_233.select(dim=1, index=14)
    v_655 = self.stylegan_decoder_style_convs_13_modulated_conv_modulation(v_654)
    v_656 = v_655.view(1, 1, 64, 1, 1)
    v_657 = (v_652 * v_656)
    v_658 = v_657.pow(2)
    v_659 = torch.sum(input=v_658, dim=(2,3,4), keepdim=False)
    v_660 = torch.rsqrt((v_659 + 1.000000e-08))
    v_661 = v_660.view(1, 64, 1, 1, 1)
    v_662 = (v_657 * v_661)
    v_663 = v_662.view(64, 64, 3, 3)
    v_664 = F.conv2d(input=v_653, weight=v_663, bias=None, dilation=(1,1), groups=1, padding=(1,1), stride=(1,1))
    v_665 = (v_664 * 1.414214e+00)
    v_666 = v_665.new_empty(size=(1,1,512,512))
    v_667 = aten::normal_(v_666, v_649, v_648, v_647)
    v_668 = ((v_665 + (v_651 * v_667)) + v_650)
    v_669 = self.stylegan_decoder_style_convs_13_activate(v_668)
    v_670 = self.bias
    v_671 = self.weight
    v_672 = v_233.select(dim=1, index=15)
    v_673 = self.stylegan_decoder_to_rgbs_6_modulated_conv_modulation(v_672)
    v_674 = v_673.view(1, 1, 64, 1, 1)
    v_675 = (v_671 * v_674)
    v_676 = v_675.view(3, 64, 1, 1)
    v_677 = F.conv2d(input=v_669, weight=v_676, bias=None, dilation=(1,1), groups=1, padding=(0,0), stride=(1,1))
    v_678 = F.upsample(input=v_620, align_corners=False, mode='bilinear', scale_factor=(2.000000,2.000000))
    v_679 = ((v_677 + v_670) + v_678)
    v_680 = [v_100, v_122, v_144, v_166, v_188, v_210, v_232]
    v_681 = (v_679, v_680, )
    return v_681

def export_torchscript():
    net = Model()
    net.eval()

    torch.manual_seed(0)
    v_0 = torch.rand(1, 3, 512, 512, dtype=torch.float)

    mod = torch.jit.trace(net, v_0)
    mod.save("E://opt//ncnn//tools//pnnx//build_debug//install//bin//a.pnnx.py.pt")

def test_inference():
    net = Model()
    net.eval()

    torch.manual_seed(0)
    v_0 = torch.rand(1, 3, 512, 512, dtype=torch.float)

    return net(v_0)

**

os: Windows 10, VS2017 x64 Debug, Libtorch 1.10+cpu, Protobuf 3.4.0
cmd: pnnx.exe a.pt inputshape=[1,3,512,512]

Could you please take a look at this? Much appreciated.
@nihui

@zhu-zhaofei

In ncnn/tools/pnnx/src/pass_ncnn/chain_multi_output.cpp:
// Tencent is pleased to support the open source community by making ncnn available.
//
// Copyright (C) 2021 THL A29 Limited, a Tencent company. All rights reserved.
//
// Licensed under the BSD 3-Clause License (the "License"); you may not use this file except
// in compliance with the License. You may obtain a copy of the License at
//
// https://opensource.org/licenses/BSD-3-Clause
//
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.

#include "chain_multi_output.h"

#include <algorithm>

namespace pnnx {

namespace ncnn {

void chain_multi_output(Graph& graph)
{
for (;;)
{
    bool need_eliminate = false;

    for (int i = (int)graph.ops.size() - 1; i >= 0; i--)
    {
        Operator* op = graph.ops[i];

        if (op->type != "pnnx.Output")
            continue;

        // prim::TupleConstruct     pnnx_791                 2 1 a b out
        // pnnx.Expression          pnnx_expr_0              3 1 a b c out expr=[@0,@1,@2]
        // pnnx.Output              pnnx_output_0            1 0 out
        bool match_tuple_expr_output = false;
        for (int j = 0; j < (int)op->inputs.size(); j++)
        {
            Operand* r = op->inputs[j];

            if (r->consumers.size() != 1)
                continue;

            Operator* op0 = r->producer;

            if (op0->type == "prim::TupleConstruct")
            {
                match_tuple_expr_output = true;
            }
            else if (op0->type == "pnnx.Expression")
            {
                const int op_expr_input_count = (int)op0->inputs.size();
                const std::string& expr = op0->params.at("expr").s;

                std::string pattern_expr = "[";
                for (int k = 0; k < op_expr_input_count; k++)
                {
                    pattern_expr += std::string("@") + std::to_string(k);

                    if (k != op_expr_input_count - 1)
                        pattern_expr += ",";
                }
                pattern_expr += "]";

                if (expr == pattern_expr)
                {
                    match_tuple_expr_output = true;
                }
            }

            if (!match_tuple_expr_output)
                continue;

            // chain op0 as output and delete op0
            std::vector<Operand*> new_inputs;
            for (int k = 0; k < j; k++)
            {
                new_inputs.push_back(op->inputs[k]);
            }

            for (Operand* r : op0->inputs)
            {
                r->remove_consumer(op0);
                r->consumers.push_back(op);
                new_inputs.push_back(r);
            }

            for (int k = j + 1; k < (int)op->inputs.size(); k++)
            {
                new_inputs.push_back(op->inputs[k]);
            }

            op->inputs = new_inputs;

            op0->inputs.clear();
            op0->outputs.clear();

            Operand* op0_out = op0->outputs[0];
            op0_out->producer = 0;
            op0_out->consumers.clear();

            graph.operands.erase(std::find(graph.operands.begin(), graph.operands.end(), op0_out));
            delete op0_out;

            graph.ops.erase(std::find(graph.ops.begin(), graph.ops.end(), op0));
            delete op0;

            break;
        }

        if (match_tuple_expr_output)
            need_eliminate = true;

        break;
    }

    if (!need_eliminate)
        break;
}

}

} // namespace ncnn

} // namespace pnnx
Line 99 calls op0->outputs.clear();
but line 101 then does Operand* op0_out = op0->outputs[0];
so op0_out indexes an already-cleared vector and is out of bounds, isn't it? This code needs to be fixed.
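For reference, a minimal sketch of one possible reordering (my own assumption, not an official patch): read the pointer to op0's only output while op0->outputs still holds it, detach and delete that operand, and only then clear and delete op0 itself.

// hypothetical fix: take op0->outputs[0] BEFORE the outputs vector is cleared
Operand* op0_out = op0->outputs[0];
op0_out->producer = 0;
op0_out->consumers.clear();

graph.operands.erase(std::find(graph.operands.begin(), graph.operands.end(), op0_out));
delete op0_out;

// now it is safe to clear and delete op0 itself
op0->inputs.clear();
op0->outputs.clear();

graph.ops.erase(std::find(graph.ops.begin(), graph.ops.end(), op0));
delete op0;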

@zhang0557kui

layer F.grid_sample not exists or registered

@csukuangfj
Contributor

csukuangfj commented Aug 6, 2022

https://github.com/nihui/ncnn/blob/a8992896f909d81cc1ba7d56bdc7298ebda7ecc9/tools/pnnx/README.md

This commit uploads a huge pile of source code all at once. @nihui, could you share the git history log?

@LRY89757
Contributor

layer F.grid_sample not exists or registered

Now F.grid_sample has been implemented. #4288
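In case it helps later readers, here is a minimal, untested sketch of exporting a model that uses F.grid_sample with the usual trace-then-convert workflow; the module name, file names, and shapes are made up for illustration.

import torch
import torch.nn.functional as F

class GridSampleNet(torch.nn.Module):
    def forward(self, x, grid):
        # sample x at the normalized coordinates given in grid
        return F.grid_sample(x, grid, mode='bilinear', align_corners=False)

net = GridSampleNet().eval()
x = torch.rand(1, 3, 64, 64)
grid = torch.rand(1, 64, 64, 2) * 2 - 1  # grid coordinates in [-1, 1]

mod = torch.jit.trace(net, (x, grid))
mod.save("grid_sample.pt")

# then convert with pnnx, e.g.
#   pnnx grid_sample.pt inputshape=[1,3,64,64],[1,64,64,2]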
