Pretrained model not accepting optimizer

For this code,
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=3)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5)
)
This gives me the following error:

ValueError: Could not interpret optimizer identifier: <keras.src.optimizers.adam.Adam object at 0x7e0d28e55fc0>

What should I do? I am using Google Colab.

I’m having the same issue, but using this code:

model = TFBertForSequenceClassification.from_pretrained('neuralmind/bert-base-portuguese-cased', num_labels=2)
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])

Same problem here. I tried switching optimizers, but it fails for all of them (SGD or Adam).

This is also the case for me on TF 2.15, and when I change to TF 2.16 and try from transformers import TFBertForSequenceClassification, I get the issue below:

RuntimeError: Failed to import transformers.models.bert.modeling_tf_bert because of the following error (look up to see its traceback):
module 'tensorflow._api.v2.compat.v2.__internal__' has no attribute 'register_load_context_function'

Same error here. I am using Keras 2.15, and this code is failing now (two months ago it was working fine):

import tensorflow as tf
from transformers import TFBertForSequenceClassification

hf_model_name = "dccuchile/bert-base-spanish-wwm-cased"
model = TFBertForSequenceClassification.from_pretrained(hf_model_name)

optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')

model.compile(optimizer=optimizer, loss=loss, metrics=[metric])

Error:

ValueError: Could not interpret optimizer identifier: <keras.src.optimizers.adam.Adam object at 0x7e3ac5f13d60>

Hi all, Matt from Hugging Face here! The cause is that TensorFlow has switched to Keras 3 as the ‘default’ Keras as of TF 2.16, and Keras 3 is often installed alongside TF 2.15 as well. The errors in this thread are because Keras 3 objects are being passed to Keras 2 model objects and code.

The quickest solution is to pip install tf-keras and then set the environment variable TF_USE_LEGACY_KERAS=1. This will make tf.keras point to Keras 2, and your code should work as before.
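
For example, a minimal Colab sketch of this setup (assuming tf-keras is already installed); the key detail is that the environment variable is set before TensorFlow or transformers is imported:

import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"  # must run before the imports below

import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# Same model and optimizer as in the first post, now compiled against Keras 2
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=3)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5))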

We’re also pushing a fix to transformers to do this by default here. If you want to try the fixed transformers branch instead, you can install it with pip install git+https://github.com/huggingface/transformers.git@keras_3_compat_fix

This is an area of active work for us - please let us know if these approaches fixed your problem, and if you’re encountering any other issues with the Keras 3 transition!

Thanks for the solution, @Rocketknight1
It works correctly for me.
Looking forward to seeing this fix enabled by default in the transformers library.
Regards

Hi Matt,

neither solution worked for me in Google Colab.

  • Solution 1 produced the same error for me
!pip install tf-keras
import os
os.environ['TF_USE_LEGACY_KERAS'] = '1'
  • Solution 2 !pip install git+https://github.com/huggingface/transformers.git@keras_3_compat_fix resulted in the following message:
Collecting git+https://github.com/huggingface/transformers.git@keras_3_compat_fix
  Cloning https://github.com/huggingface/transformers.git (to revision keras_3_compat_fix) to /tmp/pip-req-build-u6t3ei27
  Running command git clone --filter=blob:none --quiet https://github.com/huggingface/transformers.git /tmp/pip-req-build-u6t3ei27
  WARNING: Did not find branch or tag 'keras_3_compat_fix', assuming revision or ref.
  Running command git checkout -q keras_3_compat_fix
  error: pathspec 'keras_3_compat_fix' did not match any file(s) known to git
  error: subprocess-exited-with-error
  
  × git checkout -q keras_3_compat_fix did not run successfully.
  │ exit code: 1
  ╰─> See above for output.
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× git checkout -q keras_3_compat_fix did not run successfully.
│ exit code: 1
╰─> See above for output.

What can I do?

Try !pip install tf-keras==2.15.0

Hi @leon-hecht, the keras_3_compat_fix branch has been deleted since it has now been merged into transformers and included in version 4.39, which was released today. You can now just run pip install --upgrade transformers to get the fix instead of pip install git+https://github.com/huggingface/transformers.git@keras_3_compat_fix. Let me know if you're still encountering problems!

I am facing the same issue; I followed along the entire thread and am still not able to resolve it.

I just installed the latest version of the Transformers library (v4.39.0) on my Google Colab, and it worked for me. Thanks @Rocketknight1

Code:

!pip install --upgrade transformers
import transformers
print(transformers.__version__)

Hi @mahreenfatima, can you confirm that you’re using transformers 4.39, and if so, can you paste me some sample code that shows your issue?

This is the code that is raising a ValueError. I am using transformers version 4.39.0 and TensorFlow 2.15.0. I tried restarting the session on Google Colab to see the changes, and I still get the same error.

opt_new = Adam(learning_rate=learning_rate_scheduler)
model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

loss = SparseCategoricalCrossentropy(from_logits=True)

model.compile(optimizer=opt_new, loss=loss, metrics=["accuracy"])


ValueError                                Traceback (most recent call last)

in <cell line: 1>()
----> 1 model.compile(optimizer=opt_new, loss=loss, metrics=["accuracy"])

2 frames

/usr/local/lib/python3.10/dist-packages/transformers/modeling_tf_utils.py in compile(self, optimizer, loss, metrics, loss_weights, weighted_metrics, run_eagerly, steps_per_execution, **kwargs)
   1502     # This argument got renamed, we need to support both versions
   1503     if "steps_per_execution" in parent_args:
-> 1504         super().compile(
   1505             optimizer=optimizer,
   1506             loss=loss,

/usr/local/lib/python3.10/dist-packages/tf_keras/src/utils/traceback_utils.py in error_handler(*args, **kwargs)
     68     # To get the full stack trace, call:
     69     # tf.debugging.disable_traceback_filtering()
---> 70     raise e.with_traceback(filtered_tb) from None
     71   finally:
     72     del filtered_tb

/usr/local/lib/python3.10/dist-packages/tf_keras/src/optimizers/__init__.py in get(identifier, **kwargs)
    332         )
    333     else:
--> 334         raise ValueError(
    335             f"Could not interpret optimizer identifier: {identifier}"
    336         )

ValueError: Could not interpret optimizer identifier: <keras.src.optimizers.adam.Adam object at 0x7aa3c841a830>

Also, for these imports:

from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.optimizers.schedules import PolynomialDecay

when I hover over tensorflow.keras it is underlined and shows the following message:

Import "tensorflow.keras.optimizers.schedules" could not be resolved (reportMissingImports)

I tried all the solutions suggested in this discussion but am still facing the same issue.
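
For anyone who wants to compare against a known-good setup, here is a minimal sketch of the same kind of pipeline with the legacy-Keras flag set before any TensorFlow import (the checkpoint name and the PolynomialDecay parameters are placeholders, not values taken from this thread):

import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"  # set before importing TensorFlow/transformers

import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

checkpoint = "bert-base-uncased"  # placeholder checkpoint
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5e-5, end_learning_rate=0.0, decay_steps=1000
)

model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

If this compiles cleanly but your own notebook does not, the difference is most likely in where the optimizer, loss, and schedule classes are imported from and when the environment variable is set.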

I am using Colab, and I did everything this thread says:

  • !pip install --upgrade transformers → Upgraded to 4.39.2 version
  • !pip install tf-keras
  • os.environ['TF_USE_LEGACY_KERAS'] = '1' → I've set up the legacy version

After all that, I restarted the Colab session and it worked. Thanks!

I have also tried everything mentioned above. My transformers version is up to date and I have done what others have done, but to no avail. What is the current recommendation that works?

I have the same problem and am still getting the same error. I have tried everything, but it doesn't work. I am working on a project and am short on time. Please help.

I've also tried all the steps mentioned above, but nothing is working. Attached is a screenshot showing the error I'm getting.

EDIT: The problem was solved when I restarted the kernel. Do not just refresh the Google Colab page (after doing all the steps); restart the runtime instead.
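
If you would rather restart from code than from the Runtime menu, one trick that is commonly used in Colab (it simply kills the runtime process, so treat it as a workaround rather than an official API) is:

import os
# Kill the current runtime process; Colab then starts a fresh interpreter.
# pip installs survive the restart, but os.environ settings do not, so re-set
# TF_USE_LEGACY_KERAS at the top of the first cell you run afterwards.
os.kill(os.getpid(), 9)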

Hello @Rocketknight1, I have followed all of the above recommendations:

!pip install --upgrade transformers
!pip install tf-keras
import os
os.environ['TF_USE_LEGACY_KERAS'] = '1'

AND restarted the Colab session after installation, but I still get the error:

ValueError: Could not interpret optimizer identifier: <keras.src.optimizers.adam.Adam object at 0x7f3e884e2b60>

I am using

checkpoint = "bert-base-uncased" 
model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

Any recommendations as to why this may still not be working for some?
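
One thing that may help narrow it down (a hedged suggestion, not something confirmed in this thread): after restarting, check which Keras the session actually resolves before building the model:

import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"  # must run before the imports below

import tensorflow as tf
import transformers

print(transformers.__version__)              # should be >= 4.39
print(tf.keras.__version__)                  # should report a 2.x (legacy) Keras, not 3.x
print(type(tf.keras.optimizers.Adam(1e-5)))  # should not be keras.src.optimizers.adam.Adam

If the optimizer type still comes from keras.src, the environment variable took effect too late, i.e. TensorFlow had already been imported in that session before it was set.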