Automatic evaluation of machine translation (MT) rests on the assumption that MT output is better the more similar it is to human translation (HT). While automatic metrics built on this similarity assumption enable fast, large-scale evaluation of MT progress and are therefore widely used, they have certain limitations. One is that they cannot recognise acceptable differences between MT and HT. A frequent cause of such differences is translation shifts: optional departures from theoretical formal correspondence between source- and target-language units, made to adapt the text to the norms and conventions of the target language. This work draws on the author’s own translation experience, comparing translations produced for MT evaluation with translations produced for purposes unrelated to MT. The main observation is that, although no instructions to this effect were given, fewer translation shifts were performed when translating for MT evaluation than when translating for other purposes. This finding will hopefully initiate further systematic research, both from the perspective of MT and from that of translation studies (TS), and bring translation theory and MT closer together.
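To illustrate how similarity-based metrics penalise acceptable differences, the following is a minimal sketch of a clipped n-gram precision score (the core ingredient of BLEU-style metrics). The sentences are hypothetical examples, not taken from the paper: a translation containing a legitimate translation shift scores lower than a formally corresponding one, even though both are acceptable.

```python
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Fraction of candidate n-grams that also occur in the reference,
    with counts clipped to the reference (as in BLEU's modified precision)."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    if not cand_ngrams:
        return 0.0
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    return overlap / sum(cand_ngrams.values())

# Hypothetical example sentences for illustration only.
reference = "he likes swimming"           # human reference translation
literal = "he likes swimming"             # MT output in formal correspondence
shifted = "swimming is what he likes"     # acceptable output with a shift

print(ngram_precision(literal, reference, 1))  # 1.0: full unigram overlap
print(ngram_precision(shifted, reference, 1))  # 0.6: shift is penalised
```

The shifted version is penalised purely for differing in surface form, which is exactly the limitation the similarity assumption entails.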