Legal-BERT was pretrained on a large corpus of legal documents using Google's original BERT code.
The Hugging Face implementation of this model can easily be set up to predict missing words in a sequence of legal text (see the sketch below). It also shows meaningful performance improvements on legal tasks such as discerning contracts from non-contracts (binary classification) and classifying legal clauses by type (multi-label classification).
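As a minimal sketch of the masked-word prediction just described, the `fill-mask` pipeline from the `transformers` library can load a Legal-BERT checkpoint in a few lines. The checkpoint name `nlpaueb/legal-bert-base-uncased` and the sample sentence are assumptions for illustration; substitute the checkpoint you actually intend to use.

```python
from transformers import pipeline

# Load a Legal-BERT checkpoint into a masked-word prediction pipeline.
# "nlpaueb/legal-bert-base-uncased" is one published Legal-BERT model
# on the Hugging Face Hub, assumed here for illustration.
fill_mask = pipeline("fill-mask", model="nlpaueb/legal-bert-base-uncased")

# Ask the model to fill in the masked token in a legal-sounding sentence.
predictions = fill_mask(
    "The lessee shall pay the [MASK] on the first day of each month."
)

# Each prediction includes the candidate token and its probability score.
for p in predictions:
    print(f"{p['token_str']:>12}  score={p['score']:.3f}")
```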
Thanks to the magic of Hugging Face, this model should be accessible even to novice coders. Further training and fine-tuning will, however, require training data and a basic understanding of how to do this with the Hugging Face libraries.
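For a sense of what that fine-tuning involves, here is a hedged sketch of adapting the same checkpoint to the binary contract/non-contract task using the `Trainer` API. The file `contracts.csv` (with `text` and `label` columns) is a hypothetical dataset standing in for whatever labeled data you have; the training settings are defaults, not tuned values.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "nlpaueb/legal-bert-base-uncased"  # assumed checkpoint, as above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2  # contract vs. non-contract
)

# "contracts.csv" is a hypothetical file of labeled examples with
# "text" and "label" columns; hold out 10% for evaluation.
dataset = load_dataset("csv", data_files="contracts.csv")["train"]
dataset = dataset.train_test_split(test_size=0.1)

# Tokenize the raw text so the model can consume it.
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="legal-bert-contracts",
                           num_train_epochs=3),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,  # enables dynamic padding of batches
)
trainer.train()
```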