Current state‑of‑the‑art neural machine translation (NMT) architectures usually do not take document‑level context into account. However, the document‑level context of a source sentence to be translated can encode valuable information that guides the MT model towards a better translation. Recently, MT researchers have turned their focus to this line of research. As an example,
example, hierarchical attention network (HAN) models use document‑level context for translation
prediction. In this work, we studied the translations produced by HAN‑based MT systems and examined how contextual information improves translation in document‑level NMT. More specifically, we investigated why context‑aware models such as HAN outperform vanilla baseline NMT
systems that do not take context into account. We considered Hindi‑to‑English, Spanish‑to‑English
and Chinese‑to‑English for our investigation. We experimented with how HAN forms the conditional context (i.e., the neighbouring sentences) of the source sentences to be translated when predicting their
target translations. Interestingly, we observed that the translation quality of specific source sentences correlates strongly with the context in which those sentences appear. Based on their
sensitivity to context, we classify our test‑set sentences into three categories, namely context‑sensitive, context‑insensitive and normal. We believe that this categorization may change the way in which context is utilized in document‑level translation.
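One plausible way to operationalize such a categorization is to compare sentence‑level quality scores (e.g., sentence BLEU) of a context‑aware system against a context‑agnostic baseline and bucket sentences by how much the score moves. The sketch below illustrates this idea only; the function name, score inputs, and the `lo`/`hi` thresholds are hypothetical and not taken from the paper:

```python
def categorize_by_context_sensitivity(baseline_score: float,
                                      context_score: float,
                                      lo: float = 0.5,
                                      hi: float = 2.0) -> str:
    """Bucket a sentence by how strongly its translation quality
    reacts to document-level context.

    baseline_score / context_score: sentence-level quality scores
    (e.g., sentence BLEU) for the context-agnostic and the
    context-aware system, respectively. The thresholds lo and hi
    are illustrative values, not from the paper.
    """
    delta = abs(context_score - baseline_score)
    if delta >= hi:
        return "context-sensitive"    # quality changes markedly with context
    if delta <= lo:
        return "context-insensitive"  # quality barely changes with context
    return "normal"                   # moderate effect either way

# Example: a sentence whose score jumps by 4 points when context is added
print(categorize_by_context_sensitivity(20.0, 24.0))
```

A threshold‑based split like this keeps the categories mutually exclusive, which would make it straightforward to report per‑category statistics for each language pair.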