Multi-Modal Machine Learning. An Introduction to BERT Pre-Trained Visio-Linguistic Models


Seminar Paper, 2021

22 Pages, Grade: 1,3


Abstract or Introduction

In the field of multi-modal machine learning, where models learn by fusing information from several sensory modalities, this paper provides an introduction to BERT-based pre-trained visio-linguistic models by summarizing and analyzing two approaches, ViLBERT and VL-BERT, and by highlighting and discussing their distinctive characteristics. The paper is structured into five chapters. Chapter 2 lays out the fundamental principles by introducing the Transformer encoder and BERT. Chapter 3 presents the two selected visio-linguistic models, ViLBERT and VL-BERT. Chapter 4 summarizes and compares both models. The paper concludes with an outlook in chapter 5.

Transfer learning is a powerful technique in deep learning. First, a model is pre-trained on one task. Then, fine-tuning adapts the trained network, taken as the basis of a new purpose-specific model, to a separate task. In this way, transfer learning reduces the need to develop new models from scratch for every task and thus saves time for training and verification. Pre-trained models of this kind now exist in computer vision, in natural language processing (NLP) and, more recently, for visio-linguistic tasks. The pre-trained models presented later in this paper are both based on BERT. BERT, short for Bidirectional Encoder Representations from Transformers, is a popular pre-training approach for NLP built on the encoder of the Transformer architecture.
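The pre-train-then-fine-tune recipe described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not the method of ViLBERT or VL-BERT: a fixed random projection stands in for a backbone whose weights came from pre-training (in practice, a model such as BERT), and only a small task-specific head is trained on the downstream data while the backbone stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" backbone: in a real setting these weights would come from
# pre-training on a large corpus; here a fixed random projection is a
# hypothetical stand-in to keep the sketch self-contained.
W_pretrained = rng.normal(size=(10, 4))   # frozen: never updated below

def extract_features(x):
    """Map raw inputs to representations using the frozen backbone."""
    return np.tanh(x @ W_pretrained)

# Toy downstream dataset for fine-tuning.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy binary labels

H = extract_features(X)    # features from the frozen backbone
w_head = np.zeros(4)       # new task-specific head, trained from scratch
b_head = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fine-tuning: plain gradient descent on the head only.
losses = []
for _ in range(300):
    p = sigmoid(H @ w_head + b_head)
    losses.append(-np.mean(y * np.log(p + 1e-9)
                           + (1 - y) * np.log(1 - p + 1e-9)))
    grad = p - y                            # d(loss)/d(logit) for cross-entropy
    w_head -= 0.1 * (H.T @ grad) / len(y)
    b_head -= 0.1 * grad.mean()

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because only the small head is updated, fine-tuning converges with far less data and compute than training the whole model would require, which is exactly the economy that makes transfer learning attractive.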

Details

Title
Multi-Modal Machine Learning. An Introduction to BERT Pre-Trained Visio-Linguistic Models
College
University of Trier  (Computerlinguistik und Digital Humanities)
Course
Mathematische Modellierung
Grade
1,3
Author
Johanna Garthe
Year
2021
Pages
22
Catalog Number
V1431361
ISBN (eBook)
9783346983749
ISBN (Book)
9783346983756
Language
English
Keywords
Multi-Modal Machine Learning, Machine Learning, NLP, Natural Language Processing, BERT, Transformer
Quote paper
Johanna Garthe (Author), 2021, Multi-Modal Machine Learning. An Introduction to BERT Pre-Trained Visio-Linguistic Models, Munich, GRIN Verlag, https://www.grin.com/document/1431361
