This paper investigates the dependence of existing
state-of-the-art person re-identification models on the presence
and visibility of human faces. We apply a face detection and
blurring algorithm to create anonymized versions of several
popular person re-identification datasets, including Market-1501,
DukeMTMC-reID, CUHK03, VIPeR, and Airport. Using a cross-section
of existing state-of-the-art models that range in accuracy
and computational efficiency, we evaluate the effect of this
anonymization on re-identification performance using standard
metrics. Perhaps surprisingly, the effect on mAP is very small,
and accuracy is recovered simply by training on the anonymized
versions of the data rather than the original data. These findings
are consistent across multiple models and datasets, and indicate
that datasets can be safely anonymized by blurring faces without
significantly impacting the performance of person re-identification
systems. This may allow for the release of new, richer
re-identification datasets where previously there were privacy or
data protection concerns.
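
The anonymization step described above (detect faces, then blur each detected region) can be sketched as below. This is a minimal illustration, not the paper's actual pipeline: the face boxes are assumed to come from some external detector, and a simple mean blur stands in for whatever detection and blurring algorithm the authors use.

```python
import numpy as np

def mean_blur(region: np.ndarray, k: int = 9) -> np.ndarray:
    """Blur an (H, W, C) region with a k x k mean filter (edge-padded)."""
    pad = k // 2
    padded = np.pad(region, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.empty(region.shape, dtype=np.float64)
    for i in range(region.shape[0]):
        for j in range(region.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean(axis=(0, 1))
    return out.astype(region.dtype)

def anonymize_faces(image: np.ndarray, boxes) -> np.ndarray:
    """Return a copy of `image` with each (x, y, w, h) face box blurred.

    `boxes` is assumed to be the output of a face detector; producing
    those boxes is outside the scope of this sketch.
    """
    out = image.copy()
    for (x, y, w, h) in boxes:
        out[y:y + h, x:x + w] = mean_blur(out[y:y + h, x:x + w])
    return out
```

Because only the pixels inside detected face boxes are modified, the rest of each pedestrian image (clothing, body shape, gait context) is left intact for the re-identification models to use.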