PyTorchBenchmark pickle local object error

Hi, I’m trying to benchmark the warmup time, CPU frequency, and memory usage of my code. Here is the code snippet I am using:

from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(models=["bert-base-uncased"], batch_sizes=[batch_size], sequence_lengths=[sequence_length])
benchmark = PyTorchBenchmark(args)
benchmark.run()

print(benchmark.config_dict)

I followed the code from this documentation. However, when I ran it, I got the following error:

/Users/bahk_insung/miniconda3/lib/python3.10/site-packages/transformers/benchmark/benchmark_args_utils.py:136: FutureWarning: The class <class 'transformers.benchmark.benchmark_args.PyTorchBenchmarkArguments'> is deprecated. Hugging Face Benchmarking utils are deprecated in general and it is advised to use external Benchmarking libraries  to benchmark Transformer models.
  warnings.warn(
/Users/bahk_insung/miniconda3/lib/python3.10/site-packages/transformers/benchmark/benchmark_utils.py:615: FutureWarning: The class <class 'transformers.benchmark.benchmark.PyTorchBenchmark'> is deprecated. Hugging Face Benchmarking utils are deprecated in general and it is advised to use external Benchmarking libraries  to benchmark Transformer models.
  warnings.warn(
1 / 1
Traceback (most recent call last):
  File "/Users/bahk_insung/Documents/Github/cplex_lib/nlp_partioning/main.py", line 21, in <module>
    computation_intensity = r.get_computation_intensity()
  File "/Users/bahk_insung/Documents/Github/cplex_lib/nlp_partioning/nlp_resource.py", line 86, in get_computation_intensity
    cpu_freq, latency = self.load_model()
  File "/Users/bahk_insung/Documents/Github/cplex_lib/nlp_partioning/nlp_resource.py", line 97, in load_model
    benchmark.run()
  File "/Users/bahk_insung/miniconda3/lib/python3.10/site-packages/transformers/benchmark/benchmark_utils.py", line 710, in run
    memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
  File "/Users/bahk_insung/miniconda3/lib/python3.10/site-packages/transformers/benchmark/benchmark_utils.py", line 679, in inference_memory
    return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
  File "/Users/bahk_insung/miniconda3/lib/python3.10/site-packages/transformers/benchmark/benchmark_utils.py", line 100, in multi_process_func
    p.start()
  File "/Users/bahk_insung/miniconda3/lib/python3.10/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/Users/bahk_insung/miniconda3/lib/python3.10/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/Users/bahk_insung/miniconda3/lib/python3.10/multiprocessing/context.py", line 288, in _Popen
    return Popen(process_obj)
  File "/Users/bahk_insung/miniconda3/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/Users/bahk_insung/miniconda3/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/Users/bahk_insung/miniconda3/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/Users/bahk_insung/miniconda3/lib/python3.10/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'separate_process_wrapper_fn.<locals>.multi_process_func.<locals>.wrapper_func'

It’s worth noting that the code is not running inside a class or any other object.
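From the traceback, the failure happens when `transformers` hands `_inference_memory` to a child process: on macOS, Python’s default multiprocessing start method is `spawn`, which pickles the target function, and the nested `wrapper_func` created inside `separate_process_wrapper_fn` is a local object that pickle cannot reference by an importable name. A minimal stdlib sketch of what I think is the same failure mode (no `transformers` involved):

```python
import pickle


def make_wrapper():
    # A function defined inside another function is a "local object":
    # pickle can only serialize functions it can re-import by their
    # qualified name, and 'make_wrapper.<locals>.wrapper' has no
    # importable name at module level.
    def wrapper():
        return 42

    return wrapper


if __name__ == "__main__":
    target = make_wrapper()
    try:
        # This is essentially what the spawn start method does before
        # launching the child process.
        pickle.dumps(target)
    except (AttributeError, pickle.PicklingError) as exc:
        print(f"pickling failed: {exc}")
```

If that is indeed the cause, I suspect passing `multi_process=False` to `PyTorchBenchmarkArguments` would keep the measurement in-process and sidestep the pickling step entirely, though I’m not sure that is the intended fix.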