
Multi-Modal Machine Learning. An Introduction to BERT Pre-Trained Visio-Linguistic Models

Seminar Paper, 2021, 22 Pages, Grade: 1,3

Author: Johanna Garthe

Computer Sciences - Computational linguistics

In the field of multi-modal machine learning, where the fusion of various sensory inputs shapes learning paradigms, this paper provides an introduction to BERT-based pre-trained visio-linguistic models by summarizing and analyzing two approaches, ViLBERT and VL-BERT, in order to highlight and discuss their distinctive characteristics. The paper is structured into five chapters. Chapter 2 lays out the fundamental principles by introducing the characteristics of the Transformer encoder and BERT. Chapter 3 presents the selected visio-linguistic models, ViLBERT and VL-BERT. Chapter 4 summarizes and discusses both models. The paper concludes with an outlook in chapter 5.

Transfer learning is a powerful technique in deep learning. First, a model is pre-trained on a specific task. Then fine-tuning is performed by taking the trained network as the basis of a new purpose-specific model and applying it to a separate task. In this way, transfer learning reduces the need to develop new models from scratch for every new task and hence saves time for training and verification. Nowadays, such pre-trained models exist in computer vision, natural language processing (NLP) and, more recently, for visio-linguistic tasks. The pre-trained models presented later in this paper are both based on BERT. BERT, which stands for Bidirectional Encoder Representations from Transformers, is a popular pre-training technique for NLP built on the Transformer architecture.
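The pre-train-then-fine-tune idea can be illustrated with a deliberately tiny toy model (not taken from the paper): a linear "backbone" is fitted on task A, then frozen, and only a small new head is fitted on task B. All data and shapes below are invented stand-ins for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-training": learn a linear backbone on task A by least squares.
X_a = rng.normal(size=(200, 8))
W_true = rng.normal(size=(8, 4))
H_a = X_a @ W_true                                   # task-A targets
W_backbone, *_ = np.linalg.lstsq(X_a, H_a, rcond=None)  # "pre-trained" weights

# "Fine-tuning": freeze the backbone, fit only a new task-specific head on task B.
X_b = rng.normal(size=(100, 8))
y_b = (X_b @ W_true).sum(axis=1, keepdims=True)      # task-B targets
feats = X_b @ W_backbone                              # frozen backbone features
W_head, *_ = np.linalg.lstsq(feats, y_b, rcond=None)  # only the head is trained

pred = feats @ W_head
mse = float(np.mean((pred - y_b) ** 2))               # near zero: features transfer
```

Only `W_head` is fitted on task B; the backbone learned on task A is reused unchanged, which is the time saving the paragraph above describes.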

Excerpt


Table of Contents

1 Introduction

2 Fundamental Principles

2.1 Transformer

2.2 BERT

3 Visio-Linguistic Models

3.1 ViLBERT

3.2 VL-BERT

4 Discussion

5 Outlook

Objectives & Core Topics

This seminar paper provides an introduction to BERT-based visio-linguistic models by summarizing and discussing two prominent approaches, ViLBERT and VL-BERT, to understand how they effectively learn joint representations of vision and language.

  • Theoretical foundations of the Transformer architecture and BERT.
  • Mechanisms of ViLBERT's two-stream architecture for information sharing.
  • Mechanisms of VL-BERT's single-stream architecture for multimodal integration.
  • Comparative analysis of architectural differences and performance results.
  • The potential of pre-train-then-transfer approaches in multimodal learning.

Excerpt from the Book

Multi-Head Attention Mechanism

To gain more flexibility, the previously described self-attention operation is performed several times in parallel. These parallel processes are called "heads".

This multi-head attention mechanism enables individual heads to focus on specific aspects of sentence meaning and to ask questions about them. Each head h_i is an individual linear transformation of the input representation as query Q, keys K and values V, whereby scaled dot-product attention is computed h times in parallel with different linear projections (W_i^Q, W_i^K, W_i^V). The attention heads are then concatenated, projected into another representation with a matrix W^O, yielding the final output.
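The computation described above can be sketched in NumPy. The shapes and random weight matrices below are toy stand-ins for learned parameters; for simplicity the projections are packed into single matrices and split per head afterwards.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """Scaled dot-product attention over n_heads parallel heads.
    X: (seq_len, d_model); Wq, Wk, Wv, Wo: (d_model, d_model)."""
    seq_len, d_model = X.shape
    d_k = d_model // n_heads

    # Project the input, then split into heads: (n_heads, seq_len, d_k).
    def project(W):
        return (X @ W).reshape(seq_len, n_heads, d_k).transpose(1, 0, 2)

    Q, K, V = project(Wq), project(Wk), project(Wv)

    # Scaled dot-product attention, computed for all heads in parallel.
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)   # (n_heads, seq, seq)
    heads = softmax(scores) @ V                        # (n_heads, seq, d_k)

    # Concatenate the heads and project with W^O into the final output.
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ Wo

rng = np.random.default_rng(0)
d_model, seq_len, n_heads = 16, 5, 4
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv, Wo = (rng.normal(size=(d_model, d_model)) for _ in range(4))
out = multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads)  # (5, 16)
```

Each head attends with its own projections (W_i^Q, W_i^K, W_i^V); the final reshape-and-multiply implements the concatenation and W^O projection from the text.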

Chapter Summary

1 Introduction: This chapter contextualizes the necessity of multimodal machine learning and outlines the paper's aim to compare two specialized BERT-based models.

2 Fundamental Principles: This chapter establishes the technical prerequisites by explaining the Transformer architecture and the pre-training methodology of BERT.

3 Visio-Linguistic Models: This chapter details the architectural designs, input representations, and pre-training tasks of the selected models, ViLBERT and VL-BERT.

4 Discussion: This chapter compares the similarities and differences between the two models and evaluates their performance across various downstream benchmarks.

5 Outlook: This chapter highlights current research challenges and future directions for multimodal learning and pre-trainable representations.

Keywords

Multi-Modal Machine Learning, BERT, Transformer, ViLBERT, VL-BERT, Vision-and-Language, Transfer Learning, Visual Grounding, Attention Mechanism, Multimodal Alignment, Downstream Tasks, Co-Attention, Neural Networks, Computer Vision, Natural Language Processing.

Frequently Asked Questions

What is the core focus of this research paper?

The paper focuses on the field of multi-modal machine learning, specifically introducing BERT-based models that combine visual and linguistic information.

What are the primary thematic fields covered?

The main themes include the Transformer architecture, the BERT training technique, and how these are adapted for visio-linguistic tasks.

What is the central objective of the paper?

The objective is to summarize, characterize, and discuss the architectural and functional differences between the ViLBERT and VL-BERT models.

Which scientific methodology does the author employ?

The work utilizes a literature-based analysis and synthesis of current research, examining architectural blueprints and comparative performance results of published models.

What topics are addressed in the main part of the paper?

The main part covers the theoretical foundations (Transformer/BERT), detailed examinations of two specific models (ViLBERT/VL-BERT), and an evaluation of their performance.

Which criteria define the specific scope of this study?

The study is defined by the focus on BERT-pre-trained models and their application to tasks like Visual Question Answering (VQA) or Visual Commonsense Reasoning (VCR).

What is the main architectural difference highlighted between ViLBERT and VL-BERT?

ViLBERT utilizes a two-stream architecture to process modalities separately before fusing them, whereas VL-BERT employs a single-stream architecture that treats visual and linguistic inputs within one Transformer flow.

How is visual information represented in these models?

Both models use pre-trained networks like Faster R-CNN to generate region-of-interest (RoI) features, which are then treated as input tokens similar to textual data.
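The idea of treating region features as tokens can be sketched as follows; the random arrays stand in for the output of a detector such as Faster R-CNN, and all dimensions are illustrative assumptions, with the single-stream (VL-BERT-style) concatenation shown.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 768

# Stand-in for detector output: one feature vector per detected region.
# (A real Faster R-CNN backbone would produce these from an image.)
n_regions, roi_dim = 6, 2048
roi_feats = rng.normal(size=(n_regions, roi_dim))

# Project each region feature into the model's embedding space so it can
# be handled like any other input token.
W_proj = rng.normal(size=(roi_dim, d_model)) * 0.01
visual_tokens = roi_feats @ W_proj                    # (n_regions, d_model)

# Toy text side: word embeddings of the same width.
text_tokens = rng.normal(size=(4, d_model))

# Single-stream style: one joint sequence fed into a single Transformer.
joint_sequence = np.concatenate([text_tokens, visual_tokens], axis=0)
```

A two-stream model like ViLBERT would instead keep `text_tokens` and `visual_tokens` in separate Transformer streams and exchange information between them via co-attention.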


Details

Title
Multi-Modal Machine Learning. An Introduction to BERT Pre-Trained Visio-Linguistic Models
College
University of Trier  (Computerlinguistik und Digital Humanities)
Course
Mathematische Modellierung
Grade
1,3
Author
Johanna Garthe (Author)
Publication Year
2021
Pages
22
Catalog Number
V1431361
ISBN (PDF)
9783346983749
ISBN (Book)
9783346983756
Language
English
Tags
Multi-Modal Machine Learning Machine Learning NLP Natural Language Processing BERT Transformer
Product Safety
GRIN Publishing GmbH
Quote paper
Johanna Garthe (Author), 2021, Multi-Modal Machine Learning. An Introduction to BERT Pre-Trained Visio-Linguistic Models, Munich, GRIN Verlag, https://www.grin.com/document/1431361