Running into issues executing the basic examples given
#1 by erickarmbrust - opened
Attempted to run:

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text2text-generation",
    model="google/t5gemma-b-b-ul2-it",
    dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {
        "role": "user",
        "content": "Tell me an unknown interesting biology fact about the brain.",
    },
]

prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

pipe(prompt, max_new_tokens=32)
```
Which results in the following stack trace:
```
Traceback (most recent call last):
  File "/home/armbrust/code/adulting/adulting/normalization/t5.py", line 26, in <module>
    pipe(prompt, max_new_tokens=32)
  File "/home/armbrust/code/adulting/.venv/lib/python3.12/site-packages/transformers/pipelines/text2text_generation.py", line 191, in __call__
    result = super().__call__(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/armbrust/code/adulting/.venv/lib/python3.12/site-packages/transformers/pipelines/base.py", line 1467, in __call__
    return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/armbrust/code/adulting/.venv/lib/python3.12/site-packages/transformers/pipelines/base.py", line 1474, in run_single
    model_outputs = self.forward(model_inputs, **forward_params)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/armbrust/code/adulting/.venv/lib/python3.12/site-packages/transformers/pipelines/base.py", line 1374, in forward
    model_outputs = self._forward(model_inputs, **forward_params)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/armbrust/code/adulting/.venv/lib/python3.12/site-packages/transformers/pipelines/text2text_generation.py", line 220, in _forward
    output_ids = self.model.generate(**model_inputs, **generate_kwargs)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/armbrust/code/adulting/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/armbrust/code/adulting/.venv/lib/python3.12/site-packages/transformers/generation/utils.py", line 2399, in generate
    self._prepare_cache_for_generation(
  File "/home/armbrust/code/adulting/.venv/lib/python3.12/site-packages/transformers/generation/utils.py", line 2007, in _prepare_cache_for_generation
    else EncoderDecoderCache(DynamicCache(**dynamic_cache_kwargs), DynamicCache(**dynamic_cache_kwargs))
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/armbrust/code/adulting/.venv/lib/python3.12/site-packages/transformers/cache_utils.py", line 1018, in __init__
    for _ in range(config.num_hidden_layers)
                   ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/armbrust/code/adulting/.venv/lib/python3.12/site-packages/transformers/configuration_utils.py", line 207, in __getattribute__
    return super().__getattribute__(key)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'T5GemmaConfig' object has no attribute 'num_hidden_layers'
```
I have transformers 4.56.0 installed. I introspected the config, and both the encoder and decoder sub-configs have `num_hidden_layers` attributes, but the top-level config apparently does not.
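For illustration only, the failure pattern can be modeled with simplified stand-in classes (these are not the real `T5GemmaConfig` classes, and the layer counts are arbitrary): a composite encoder-decoder config exposes `num_hidden_layers` on its `encoder` and `decoder` sub-configs, so a direct lookup on the top-level object raises `AttributeError`.

```python
class SubConfig:
    """Stand-in for an encoder/decoder sub-config."""

    def __init__(self, num_hidden_layers):
        self.num_hidden_layers = num_hidden_layers


class CompositeConfig:
    """Stand-in for a composite encoder-decoder config: the layer
    count lives on the sub-configs, not on the top-level object."""

    def __init__(self):
        self.encoder = SubConfig(12)  # arbitrary illustrative value
        self.decoder = SubConfig(12)


cfg = CompositeConfig()
print(hasattr(cfg.encoder, "num_hidden_layers"))  # True
print(hasattr(cfg.decoder, "num_hidden_layers"))  # True
print(hasattr(cfg, "num_hidden_layers"))          # False: direct access raises AttributeError
```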
Hi @erickarmbrust, apologies for the delayed response. I attempted to reproduce the error but could not find any issue. Could you please try again after installing the latest version of transformers? Please see this gist for your reference. Let us know if you still face the issue. Thank you.
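If it helps to confirm the installed version before retrying, here is a minimal sketch of a dotted-version comparison (the helper and the version strings are illustrative, not part of transformers; it only handles plain x.y.z strings):

```python
def is_at_least(installed: str, required: str) -> bool:
    """Naive comparison of plain dotted version strings like '4.56.0'."""
    to_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return to_tuple(installed) >= to_tuple(required)


print(is_at_least("4.56.0", "4.56.0"))  # True
print(is_at_least("4.55.2", "4.56.0"))  # False
```

In practice, `pip show transformers` or `importlib.metadata.version("transformers")` reports the installed version.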