I tried importing adam_v2, as well as passing just the opt object, but I am getting this error:
ValueError: Could not interpret optimizer identifier: <keras.src.optimizers.adam.Adam object at 0x78d78061c650>
Apparently, there is a version incompatibility issue between Keras and TensorFlow that has been around for a long time. The solution differs for each version…
For more information, search for the version you want to use…
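A quick way to see whether you are in the problematic configuration (a minimal diagnostic sketch, not from the original posts): TensorFlow 2.16+ ships the standalone Keras 3 by default, while transformers' TF models still resolve optimizers through the Keras 2 fork, tf-keras.

```python
# Minimal diagnostic: print which Keras version is active.
# A 3.x Keras alongside transformers' TF models is the combination
# that produces "Could not interpret optimizer identifier".
import tensorflow as tf
import keras

print("TensorFlow:", tf.__version__)
print("Keras:", keras.__version__)
```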
(Related GitHub issue: opened 07 Mar 2024, closed 09 May 2024; labels: type:support, stat:awaiting response from contributor, stale)
```python
import tensorflow as tf
from datasets import load_dataset
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification, DataCollatorWithPadding
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.optimizers.schedules import PolynomialDecay
from tensorflow.keras.losses import SparseCategoricalCrossentropy
def prepare_imdb_dataset(tokenizer):
"""
Prepares the IMDB dataset for training and validation.
Args:
tokenizer: The tokenizer to use for text tokenization.
Returns:
A tuple containing the tokenized training and validation datasets.
"""
imdb = load_dataset("imdb")
train_set = imdb['train'].map(lambda x: tokenizer(x['text'], truncation=True), batched=True)
test_set = imdb['test'].map(lambda x: tokenizer(x['text'], truncation=True), batched=True)
return train_set, test_set
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
train_set, test_set = prepare_imdb_dataset(tokenizer)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
tf_train_dataset = train_set.to_tf_dataset(
columns=["attention_mask", "input_ids"],
label_cols=["label"],
shuffle=True,
collate_fn=data_collator,
batch_size=8,
)
tf_validation_dataset = test_set.to_tf_dataset(
columns=["attention_mask", "input_ids"],
label_cols=["label"],
shuffle=False,
collate_fn=data_collator,
batch_size=8,
)
num_epochs = 1  # drives both the decay schedule and model.fit below
num_train_steps = len(tf_train_dataset) * num_epochs
lr_scheduler = PolynomialDecay(
initial_learning_rate=5e-5, end_learning_rate=0.0, decay_steps=num_train_steps
)
optimizer = Adam(learning_rate=lr_scheduler)
loss = SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=num_epochs)
```
```
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFDistilBertForSequenceClassification: ['vocab_layer_norm.weight', 'vocab_transform.weight', 'vocab_projector.bias', 'vocab_transform.bias', 'vocab_layer_norm.bias']
- This IS expected if you are initializing TFDistilBertForSequenceClassification from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFDistilBertForSequenceClassification from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).
Some weights or buffers of the TF 2.0 model TFDistilBertForSequenceClassification were not initialized from the PyTorch model and are newly initialized: ['pre_classifier.weight', 'pre_classifier.bias', 'classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Map: 100% 25000/25000 [00:23<00:00, 1086.84 examples/s]
Map: 100% 25000/25000 [00:20<00:00, 1304.86 examples/s]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-17-ac80246ded67> in <cell line: 55>()
53 optimizer = Adam(learning_rate=lr_scheduler)
54 loss = SparseCategoricalCrossentropy(from_logits=True)
---> 55 model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
56
57 model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=5)
2 frames
/usr/local/lib/python3.10/dist-packages/tf_keras/src/optimizers/__init__.py in get(identifier, **kwargs)
332 )
333 else:
--> 334 raise ValueError(
335 f"Could not interpret optimizer identifier: {identifier}"
336 )
ValueError: Could not interpret optimizer identifier: <keras.src.optimizers.adam.Adam object at 0x79d9071160e0>
```
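One workaround that follows from the traceback (a sketch, assuming the tf-keras package is installed and reusing model and num_train_steps from the snippet above): since compile() resolves the optimizer through tf_keras, building the optimizer, schedule, and loss from tf_keras instead of keras keeps everything on the same side of the split.

```python
# Sketch: take optimizer/schedule/loss from tf_keras, the Keras 2 fork
# that transformers' TF models resolve identifiers against.
from tf_keras.optimizers import Adam
from tf_keras.optimizers.schedules import PolynomialDecay
from tf_keras.losses import SparseCategoricalCrossentropy

lr_scheduler = PolynomialDecay(
    initial_learning_rate=5e-5, end_learning_rate=0.0, decay_steps=num_train_steps
)
optimizer = Adam(learning_rate=lr_scheduler)
loss = SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
```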
For this code,

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=3)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5)
)

I get this error:
ValueError: Could not interpret optimizer identifier: <keras.src.optimizers.adam.Adam object at 0x7e0d28e55fc0>
What should I do? I am using Google Colab.
It works for me now after setting these to tackle:
ValueError: Could not interpret optimizer identifier: <keras.src.optimizers.adam.Adam object at 0x7cc289675050>

!pip install --upgrade transformers
!pip install tf-keras

import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"
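One caveat (my assumption about how TF_USE_LEGACY_KERAS is read, not from the original reply): the variable only takes effect if it is set before TensorFlow and transformers are first imported, so in Colab you may need to restart the runtime after the installs and put the flag at the very top of the notebook. A sketch of that ordering:

```python
# Set the flag first, before any tensorflow/transformers import.
import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"

import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# Example model; any TF model from transformers should behave the same.
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=3)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5))
```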
bhuvnn
April 5, 2025, 10:22am
ValueError Traceback (most recent call last)
in <cell line: 2>()
      1 optimizer = Adam(learning_rate=2e-5)
----> 2 model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
      3     optimizer=opt,
      4     metrics=["accuracy"])
      5 tf.keras.backend.set_value(model.optimizer.learning_rate, 2e-5)
/usr/local/lib/python3.10/dist-packages/transformers/modeling_tf_utils.py in compile(self, optimizer, loss, metrics, loss_weights, weighted_metrics, run_eagerly, steps_per_execution, **kwargs)
   1561     # This argument got renamed, we need to support both versions
   1562     if "steps_per_execution" in parent_args:
-> 1563         super().compile(
   1564             optimizer=optimizer,
   1565             loss=loss,
/usr/local/lib/python3.10/dist-packages/tf_keras/src/utils/traceback_utils.py in error_handler(*args, **kwargs)
     68     # To get the full stack trace, call:
     69     # tf.debugging.disable_traceback_filtering()
---> 70     raise e.with_traceback(filtered_tb) from None
     71 finally:
     72     del filtered_tb
/usr/local/lib/python3.10/dist-packages/tf_keras/src/optimizers/__init__.py in get(identifier, **kwargs)
    333     )
    334 else:
--> 335     raise ValueError(
    336         f"Could not interpret optimizer identifier: {identifier}"
    337     )
ValueError: Could not interpret optimizer identifier: <keras.src.optimizers.adam.Adam object at 0x7e17b44e89d0>
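Two things stand out in that traceback (my reading, not stated in the post): the cell creates optimizer but passes opt to compile(), and the lookup still goes through tf_keras. A consistent version of the cell, assuming the TF_USE_LEGACY_KERAS fix from the earlier reply is already in place:

```python
# Consistent naming; the identifier error itself still needs the
# tf-keras / TF_USE_LEGACY_KERAS fix described above.
optimizer = tf.keras.optimizers.Adam(learning_rate=2e-5)
model.compile(
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=optimizer,
    metrics=["accuracy"],
)
```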
I am also facing a similar kind of error.
It seems that there are different errors for each version…
I have the same problem and am still getting the same error. I have tried everything, but it doesn't work. I am working on a project and I am short on time. Please help.
(Related GitHub issue: opened 01 Mar 2024, closed 03 Apr 2024)
I followed Tutorial A3: [Text Classification with Hugging Face Transformers]. I tried to implement the 'bert-base-multilingual-uncased' model from HuggingFace. When using the code model = t_mod.get_classifier(), it generates an error message. This code was working perfectly some days ago. However, now it produces the following error:
**Error**
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-21-9ce4d9ec2ad7> in <cell line: 1>()
----> 1 model = t_mod.get_classifier()
3 frames
/usr/local/lib/python3.10/dist-packages/tf_keras/src/optimizers/__init__.py in get(identifier, **kwargs)
332 )
333 else:
--> 334 raise ValueError(
335 f"Could not interpret optimizer identifier: {identifier}"
336 )
ValueError: Could not interpret optimizer identifier: <keras.src.optimizers.legacy.adam.Adam object at 0x7b2c22cb13c0>
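For errors naming keras.src.optimizers.legacy.adam.Adam, the root cause is the same Keras 2 vs. Keras 3 split. If the environment flag does not help, pinning a TensorFlow release that still bundles Keras 2 is another option (exact pins depend on the rest of your stack, so treat this as a starting point):

```python
# TensorFlow 2.15 is the last release that ships Keras 2 by default;
# pinning it sidesteps the Keras 3 identifier mismatch entirely.
!pip install "tensorflow==2.15.*"
```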
system
Closed on April 5, 2025, 11:11pm
This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.