COLLECTION

Semantic Text Analyser: BERT-like language model for formal language understanding

Acronym: SeTABERTa

Description

SeTABERTa is a new multilingual language model pretrained from scratch on several Open Access text repositories: EU legislation, research articles, EU public documents and US patents. Two thirds of the training data are in English; the remainder covers the EU24 languages. The model was trained on the JRC Big Data Platform and can be fine-tuned for downstream tasks.
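
As a minimal sketch of the fine-tuning use mentioned above, and assuming the released weights are available as a Hugging Face Transformers-compatible checkpoint, loading the model for a sequence-classification task could look like the following. The local path "./setaberta", the number of labels and the example sentence are placeholders, not official identifiers from this record.

    # Minimal sketch: load a BERT/RoBERTa-style checkpoint for fine-tuning
    # on a text-classification task. The path below is a placeholder; point it
    # to wherever the SeTABERTa weights are stored locally.
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    checkpoint = "./setaberta"  # hypothetical local path to the released weights

    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(
        checkpoint,
        num_labels=2,  # e.g. a binary classification task on formal text
    )

    # Tokenize a small batch of formal-language text and run a forward pass.
    texts = [
        "This Regulation shall enter into force on the twentieth day "
        "following that of its publication in the Official Journal.",
    ]
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    outputs = model(**inputs)
    print(outputs.logits.shape)  # (batch_size, num_labels)

A standard training loop or the Transformers Trainer would then update these weights on task-specific labelled data; the sketch only shows that the checkpoint can be loaded and queried like any other BERT-like encoder.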

Contact

Email
vidas.daudaravicius (at) ec.europa.eu

Datasets (1)

Additional information

Published by
European Commission, Joint Research Centre
Created date
2024-01-30
Modified date
2024-01-30