This paper addresses the problem of predicting how adequate a machine translation is for gisting purposes. It focuses on the contribution of lexicalised features based on different types of topic models, which we believe are more robust than the features used in previous work, as the latter depend on linguistic processors that are often unreliable on automatic translations. Experiments on several datasets show promising results: the use of topic models outperforms state-of-the-art approaches by a large margin on all datasets annotated for adequacy.