Setting compute_metrics in Trainer() leads to AttributeError

#38 opened by Eyel

Using the code from the tutorial Colab, setting a custom compute_metrics in Trainer() leads to AttributeError: 'DynamicCache' object has no attribute 'detach'.

Currently on transformers version 4.41.0.dev0.

def custom_metrics(eval_preds):
    # Never reached: the AttributeError below is raised inside
    # Trainer.prediction_step before any metrics are computed.
    exit(0)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,  # evaluating (loss) on the eval set incurs some additional GPU memory
    compute_metrics=custom_metrics,
)

trainer.evaluate()

The rest of the code is the same as in the Colab.

File ~/miniconda3/lib/python3.11/site-packages/transformers/trainer.py:3513, in Trainer.evaluate(self, eval_dataset, ignore_keys, metric_key_prefix)
   3510 start_time = time.time()
   3512 eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop
-> 3513 output = eval_loop(
   3514     eval_dataloader,
   3515     description="Evaluation",
   3516     # No point gathering the predictions if there are no metrics, otherwise we defer to
   3517     # self.args.prediction_loss_only
   3518     prediction_loss_only=True if self.compute_metrics is None else None,
   3519     ignore_keys=ignore_keys,
   3520     metric_key_prefix=metric_key_prefix,
   3521 )
   3523 total_batch_size = self.args.eval_batch_size * self.args.world_size
   3524 if f"{metric_key_prefix}_jit_compilation_time" in output.metrics:

File ~/miniconda3/lib/python3.11/site-packages/transformers/trainer.py:3696, in Trainer.evaluation_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix)
   3693         batch_size = observed_batch_size
   3695 # Prediction step
-> 3696 loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
   3697 main_input_name = getattr(self.model, "main_input_name", "input_ids")
   3698 inputs_decode = self._prepare_input(inputs[main_input_name]) if args.include_inputs_for_metrics else None

File ~/miniconda3/lib/python3.11/site-packages/transformers/trainer.py:3904, in Trainer.prediction_step(self, model, inputs, prediction_loss_only, ignore_keys)
   3901 if prediction_loss_only:
   3902     return (loss, None, None)
-> 3904 logits = nested_detach(logits)
   3905 if len(logits) == 1:
   3906     logits = logits[0]

File ~/miniconda3/lib/python3.11/site-packages/transformers/trainer_pt_utils.py:190, in nested_detach(tensors)
    188 "Detach `tensors` (even if it's a nested list/tuple/dict of tensors)."
    189 if isinstance(tensors, (list, tuple)):
--> 190     return type(tensors)(nested_detach(t) for t in tensors)
    191 elif isinstance(tensors, Mapping):
    192     return type(tensors)({k: nested_detach(t) for k, t in tensors.items()})

File ~/miniconda3/lib/python3.11/site-packages/transformers/trainer_pt_utils.py:190, in <genexpr>(.0)
    188 "Detach `tensors` (even if it's a nested list/tuple/dict of tensors)."
    189 if isinstance(tensors, (list, tuple)):
--> 190     return type(tensors)(nested_detach(t) for t in tensors)
    191 elif isinstance(tensors, Mapping):
    192     return type(tensors)({k: nested_detach(t) for k, t in tensors.items()})

File ~/miniconda3/lib/python3.11/site-packages/transformers/trainer_pt_utils.py:193, in nested_detach(tensors)
    191 elif isinstance(tensors, Mapping):
    192     return type(tensors)({k: nested_detach(t) for k, t in tensors.items()})
--> 193 return tensors.detach()

AttributeError: 'DynamicCache' object has no attribute 'detach'
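
For context on the failure path: the traceback shows that prediction_step only reaches nested_detach when compute_metrics is set (otherwise prediction_loss_only is True and the method returns (loss, None, None) early). nested_detach then recurses into the model output tuple until it hits the DynamicCache stored in past_key_values, which has no .detach() method. A minimal check, assuming transformers 4.41 where DynamicCache lives in transformers.cache_utils:

from transformers.cache_utils import DynamicCache

cache = DynamicCache()
print(hasattr(cache, "detach"))  # False -> nested_detach falls through to
                                 # tensors.detach() and raises AttributeError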

Hi @Eyel,
That looks like an issue with HF Transformers (more specifically, the DynamicCache used in generate).
Can you open an issue there?
I don't think it's specific to Idefics2, and filing it upstream will help with discoverability.

Hey @VictorSanh, thanks! I looked into it, and it seems to be an issue when the model output's past_key_values is an empty DynamicCache.
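
To illustrate, here is a debugging sketch along the lines of what I ran (it assumes the trainer and model defined above, and that the eval batch contains only tensors):

import torch

batch = next(iter(trainer.get_eval_dataloader()))  # grab one eval batch
batch = {k: v.to(model.device) for k, v in batch.items()}
with torch.no_grad():
    outputs = model(**batch)

print(type(outputs.past_key_values))             # transformers.cache_utils.DynamicCache
print(outputs.past_key_values.get_seq_length())  # 0 in my run -> the cache is empty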

I'll open an issue with HF Transformers.

Thanks! For reference, if anyone encounters the same issue, Niels proposed a solution in this issue: https://github.com/huggingface/transformers/issues/30631
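
In the meantime, a possible workaround (a sketch, not necessarily the fix proposed in that issue) is to exclude past_key_values from the model outputs via the ignore_keys argument, which prediction_step uses to filter the output dict before detaching:

# Drop past_key_values so nested_detach never sees the DynamicCache.
# ignore_keys is a standard argument of Trainer.evaluate().
metrics = trainer.evaluate(ignore_keys=["past_key_values"])

Depending on the model, setting model.config.use_cache = False before evaluating may also prevent past_key_values from being returned at all.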
