
[refactor][DRAFT] Tentative input projection (inc. self attention) rewrite #299

Closed
blefaudeux wants to merge 1 commit into main from better_input_projections

Conversation


@blefaudeux commented May 9, 2022

What does this PR do?

  • possibly fixes xformers ViT-B ImageNet MAE + Deepnorm training instability #219 for deepnorm (pre- and post-norm are already fine, I think). cc @jramapuram; otherwise I'll give it a shot on ImageNet when I get the time
  • makes it explicit whether or not we are in the self-attention case, which makes errors easier to catch; explicit >> implicit, typically
  • renames "InProjContainer" to "InputProjection", which is maybe more understandable in the context of MultiHeadAttention?
  • fixes the projection weight inits in the self-attention case: the merged weight buffer is initialized in three steps, exactly as in the non-merged case (this is what could actually improve the deepnorm + self-attention case); see the sketch after this list
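
A rough illustration of that three-step init (a minimal sketch with made-up names and sizes, not the actual PR diff):

# Hypothetical sketch: initialize a merged QKV weight buffer slice by slice,
# so each projection gets exactly the init it would have had as a standalone
# nn.Linear in the non-merged case.
import torch
import torch.nn as nn

embed_dim = 64  # illustrative size E
qkv_weight = torch.empty(3 * embed_dim, embed_dim)

# Three steps, one per projection (q, k, v): each (E, E) slice is initialized
# independently, so the fan-in/fan-out seen by the init matches the non-merged
# case; a single init over the full (3E, E) buffer would see fan-out 3E and
# produce a different scale.
for i in range(3):
    nn.init.xavier_uniform_(qkv_weight[i * embed_dim : (i + 1) * embed_dim, :])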

This is still a draft until the previous PR lands and it's confirmed that this fixes something :D

TODO:

  • Improve code coverage
  • Explicitly test the init functions

Before submitting

  • Did you have fun?
    • Make sure you had fun coding 🙃
  • Did you read the contributor guideline?
  • Was this discussed/approved via a GitHub issue? (not needed for typos or doc improvements)
    • N/A
  • Did you make sure to update the docs?
    • N/A
  • Did you write any new necessary tests?
    • N/A
  • Did you update the changelog? (if needed)
    • N/A

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

@facebook-github-bot added the "CLA Signed" label (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on May 9, 2022
@blefaudeux marked this pull request as draft May 9, 2022 23:51
Base automatically changed from add_pool_op to main May 9, 2022 23:57
@blefaudeux force-pushed the better_input_projections branch 2 times, most recently from d572815 to 9826403 on May 10, 2022 00:31
@codecov-commenter

Codecov Report

Merging #299 (9826403) into main (7705e5e) will decrease coverage by 0.29%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##             main     #299      +/-   ##
==========================================
- Coverage   93.00%   92.71%   -0.30%     
==========================================
  Files          66       66              
  Lines        3546     3542       -4     
==========================================
- Hits         3298     3284      -14     
- Misses        248      258      +10     
Flag      Coverage Δ
Python    92.71% <100.00%> (-0.30%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files                                    Coverage Δ
xformers/components/__init__.py                   100.00% <100.00%> (ø)
xformers/components/attention/compositional.py    100.00% <100.00%> (ø)
xformers/components/input_projection.py           100.00% <100.00%> (ø)
xformers/components/multi_head_dispatch.py        97.95% <100.00%> (+0.08%) ⬆️
xformers/factory/model_factory.py                 88.80% <0.00%> (-7.47%) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 7705e5e...9826403.

@blefaudeux force-pushed the better_input_projections branch 2 times, most recently from 4a0c3b7 to ccd880a on May 13, 2022 06:34
@blefaudeux (Contributor, Author) commented:

This does not require that many changes, so I'm closing this PR and putting a simpler one up. Longer term, I think it would be beneficial for perf to have a better projection for self-attention; not too bad to write (see the sketch below).
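
For reference, one common perf-oriented approach is to fuse the three projections into a single matmul in the self-attention case; a minimal sketch under that assumption (hypothetical helper, not xformers code):

# Hypothetical sketch: fuse the q/k/v projections of self-attention into one
# matmul over a merged (3E, E) weight, then split the result into q, k, v.
import torch
import torch.nn.functional as F

def fused_qkv_projection(x, qkv_weight, qkv_bias=None):
    # x: (batch, seq, E); qkv_weight: (3E, E); qkv_bias: (3E,) or None
    projected = F.linear(x, qkv_weight, qkv_bias)  # -> (batch, seq, 3E)
    return projected.chunk(3, dim=-1)  # q, k, v, each (batch, seq, E)

A single fused matmul amortizes kernel-launch overhead and reads the input once instead of three times, which is where the perf benefit would come from.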

@blefaudeux closed this May 15, 2022
@blefaudeux deleted the better_input_projections branch May 30, 2022 21:28