Fix for some current issues with transformer updates
#7 opened by slowlead
I had to update setup.py to allow the current transformers and tokenizers versions (TBH, I was lazy and just changed == to >=, fully understanding this may cause issues down the line, especially with pending deprecations). I also had to make the following changes:
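For anyone making the same change, here is a minimal sketch of what the relaxed pins look like in setup.py. The version numbers below are illustrative placeholders, not the actual pins from this repo:

```python
# setup.py (excerpt) -- version numbers are illustrative, not the repo's real pins
install_requires = [
    # before: "transformers==4.19.0" (exact pin, blocks newer releases)
    "transformers>=4.19.0",  # relaxed: any newer release is accepted
    # before: "tokenizers==0.12.1"
    "tokenizers>=0.12.1",
]
```

Note that `>=` with no upper bound means a future major release can still break things silently; an upper cap like `transformers>=4.19,<5` is the more conservative option.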
- In typical_sampling.py: change `LogitsWarper` to `LogitsProcessor` (in current transformers, `LogitsProcessor` covers what `LogitsWarper` used to do)
- In autoregressive.py: change `class GPT2InferenceModel(GPT2PreTrainedModel):` to `class GPT2InferenceModel(GPT2PreTrainedModel, GenerationMixin):` (this restores the `.generate()` method on `GPT2InferenceModel`)
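A sketch of what both class-signature changes look like, assuming a recent transformers release where `LogitsWarper` has been folded into `LogitsProcessor` and `PreTrainedModel` no longer provides `.generate()` on its own (the class bodies here are stubs, not the project's real implementations):

```python
import torch
from transformers import GenerationMixin, GPT2PreTrainedModel, LogitsProcessor


# typical_sampling.py: base class changed from LogitsWarper to LogitsProcessor.
# The filtering logic itself is unchanged; only the parent class differs.
class TypicalLogitsWarper(LogitsProcessor):
    def __init__(self, mass: float = 0.9):
        self.mass = mass

    def __call__(self, input_ids: torch.LongTensor,
                 scores: torch.FloatTensor) -> torch.FloatTensor:
        # ... the real typical-sampling filtering happens here ...
        return scores


# autoregressive.py: inheriting GenerationMixin explicitly restores .generate(),
# which newer transformers no longer attaches via PreTrainedModel alone.
class GPT2InferenceModel(GPT2PreTrainedModel, GenerationMixin):
    ...
```

Both changes are drop-in: `LogitsProcessor` has the same `__call__(input_ids, scores)` interface the warper had, and adding `GenerationMixin` to the bases touches nothing else in the class.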
Appreciate the work that has gone into this project. I did two runs (one on the standard preset and one with diffusion iterations dropped to 35) and both sounded pretty good on a 26-word sentence. Stay awesome.