Description
Speaker: Marianne Stecklina
Track: PyData

Language models like BERT can capture general language knowledge and transfer it to new data and tasks. However, applying a pre-trained BERT to non-English text has limitations. Is training from scratch a good (and feasible) way to overcome them?
Recorded at the PyConDE & PyData Berlin 2019 conference. https://pycon.de
More details at the conference page: https://de.pycon.org/program/YAJRGX
Twitter: https://twitter.com/pydataberlin
Twitter: https://twitter.com/pyconde