Is the evaluate-metric/accuracy the same as macro-accuracy?

I am running tests on BERT transformers and using the `evaluate` Python library. The documentation for the accuracy metric says it is:

> computed with Accuracy = (TP + TN) / (TP + TN + FP + FN)
> Where: TP = true positive, TN = true negative, FP = false positive, FN = false negative

Since the formula is written in terms of per-class counts (TP, TN, FP, FN), this seems to suggest that it computes macro-accuracy. Is that correct?
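For context, here is a minimal sketch of how I am computing the metric; the predictions and labels below are made-up placeholders standing in for real BERT outputs:

```python
import evaluate

# Load the accuracy metric from the evaluate library
accuracy = evaluate.load("accuracy")

# Placeholder multi-class predictions and gold labels
predictions = [0, 1, 2, 2, 1, 0]
references = [0, 1, 1, 2, 1, 0]

# compute() returns a dict, e.g. {"accuracy": 0.8333...}
result = accuracy.compute(predictions=predictions, references=references)
print(result)
```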