Issues: keras-team/keras-hub
#2085 Downloaded model deserialization failure [stat:awaiting response from contributor, type:Bug] (opened Feb 4, 2025 by r-zip)
#2074 Add a high-level text to speech task with generate() support. (opened Feb 4, 2025 by divyashreepathihalli)
#2073 Support RLHF and other instruction fine-tuning options beyond supervised fine-tuning. (opened Feb 4, 2025 by divyashreepathihalli)
#2062 Exporting Keras Llama Checkpoint to HF [Gemma] (opened Jan 29, 2025 by salrowili)
#2056 The BytePairTokenizer class is extremely, extremely slow at tokenizing [stat:awaiting response from contributor, type:Bug] (opened Jan 23, 2025 by chenying99)
#2055 The attention scores are always None in CachedMultiHeadAttention (opened Jan 23, 2025 by apehex)
#2054 Running segformer error: ValueError: Attempt to convert a value (<generator object preprocessing at 0x7f59f0168f40>) with an unsupported type (<class 'generator'>) to a Tensor. [type:Bug] (opened Jan 22, 2025 by Masumekeshavarzi)
#2046 Update model implementations to use flash attention [team-created] (opened Jan 16, 2025 by divyashreepathihalli)
#2042 Keras newbie: Why doesn't loss decrease when fine-tuning LLaMA 3.2 3B on TPU v3-8 with Keras? (opened Jan 14, 2025 by aeltorio)
#2018 CLIPTokenizer does not work as expected [type:Bug] (opened Dec 11, 2024 by fdtomasi)