Space: AashishNKumar/proj11 (Paused)
main · proj11/xora/models/transformers
79.2 kB · 19 contributors · History: 10 commits
Latest commit: Sapir Weissbuch · Merge pull request #30 from LightricksResearch/fix-no-flash-attention · 05cb3e4 (unverified) · about 1 year ago
__init__.py              0 Bytes   refactor                                                                                    about 1 year ago
attention.py             49.8 kB   model: fix flash attention enabling - do not check device type at this point (can be CPU)   about 1 year ago
embeddings.py            4.47 kB   Lint: added ruff.                                                                           about 1 year ago
symmetric_patchifier.py  2.92 kB   Remove the word "pixart" from code.                                                         about 1 year ago
transformer3d.py         21.9 kB   Merge pull request #30 from LightricksResearch/fix-no-flash-attention                       about 1 year ago