Unsupervised topic models offer no guarantee that the results will be meaningful or interpretable, and there is no objective metric to assess their quality as there is in supervised learning. Human evaluation of topics is considered the gold standard, but it is expensive and does not scale to large numbers of models or topics.
Two options for evaluating results more objectively are perplexity, which measures how well the model predicts unseen documents (lower is better), and topic coherence metrics, which aim to assess the semantic quality of the discovered topics (higher is better).
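A minimal sketch of both measures using gensim; the toy corpus, the train/test split, and the `num_topics` setting are illustrative assumptions, not values from any particular experiment. Gensim's `log_perplexity` returns a per-word log-likelihood bound, so perplexity is recovered as `2 ** (-bound)`:

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

# Toy tokenized documents standing in for a real preprocessed corpus
texts = [
    ['stock', 'market', 'trading', 'price'],
    ['market', 'price', 'volatility', 'risk'],
    ['neural', 'network', 'training', 'model'],
    ['model', 'training', 'data', 'network'],
    ['stock', 'risk', 'price', 'trading'],
    ['data', 'model', 'network', 'neural'],
]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

# Hold out the last two documents as an unseen test set
train_corpus, test_corpus = corpus[:-2], corpus[-2:]

lda = LdaModel(corpus=train_corpus, id2word=dictionary,
               num_topics=2, passes=10, random_state=42)

# Perplexity on held-out documents: lower values indicate a better fit
bound = lda.log_perplexity(test_corpus)   # per-word log-likelihood bound
print(f'Perplexity: {2 ** -bound:.2f}')

# Topic coherence using the c_v measure: higher values indicate
# more semantically interpretable topics
cm = CoherenceModel(model=lda, texts=texts,
                    dictionary=dictionary, coherence='c_v')
print(f'Coherence (c_v): {cm.get_coherence():.3f}')
```

In practice, both measures are typically computed across a range of `num_topics` values to guide model selection; note that perplexity and coherence do not always agree, since a model that predicts held-out words well can still produce topics that humans find hard to interpret.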