
Is there a bug in the calculation method of head_torso_alpha in the HTB-SR v2 model? #87

Open
Mayoiuta opened this issue Jan 4, 2025 · 0 comments

Mayoiuta commented Jan 4, 2025

Thank you for the excellent code!
While studying the HTB-SR model, I noticed the following code, in which the weights tensor is cloned:

head_torso_alpha = weights_256.clone()

Since clone() produces an element-wise identical copy, (head_torso_alpha > weights_256) should always be all False at this point.
Therefore, does the following line of code—
head_torso_alpha[head_torso_alpha>weights_256] = weights_256[head_torso_alpha>weights_256]

—have no effect?

elif hparams['htbsr_head_weight_fuse_mode'] == 'v2':
    # Fuse the head and torso x via alpha-cat; replaces the previous direct alpha-weighted addition
head_torso_alpha = weights_256.clone()
head_torso_alpha[head_torso_alpha>weights_256] = weights_256[head_torso_alpha>weights_256]
rgb = rgb * head_torso_alpha + rgb_torso * (1-head_torso_alpha) # get person img
x = torch.cat([x * head_torso_alpha, x_torso * (1-head_torso_alpha)], dim=1)
x = self.fuse_head_torso_convs(x)
x, rgb = self.head_torso_block(x, rgb, ws, **block_kwargs)
head_occlusion = head_torso_alpha.clone()
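
For reference, here is a minimal sketch of the reasoning above, using a hypothetical stand-in tensor for weights_256 (the shape is arbitrary): a tensor is never strictly greater than its own clone, so the boolean mask selects no elements and the masked assignment changes nothing.

import torch

# Hypothetical stand-in for the real alpha weights; shape chosen only for illustration.
weights_256 = torch.rand(1, 1, 256, 256)
head_torso_alpha = weights_256.clone()

# No element is strictly greater than its own copy, so the mask is all False.
mask = head_torso_alpha > weights_256
print(mask.any())  # tensor(False)

# Indexing with an all-False mask selects nothing, so this assignment is a no-op.
head_torso_alpha[mask] = weights_256[mask]
print(torch.equal(head_torso_alpha, weights_256))  # True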
