Vision-Language Pre-Training: Basics, Recent Advances, and Future Trends

Author:   Zhe Gan, Linjie Li, Chunyuan Li, Lijuan Wang
Publisher:   now publishers Inc
ISBN:   9781638281320
Pages:   204
Publication Date:   05 December 2022
Format:   Paperback
Availability:   In Print
This item will be ordered in for you from one of our suppliers. Upon receipt, we will promptly dispatch it to you. For in-store availability, please contact us.


Overview

Humans perceive the world through many channels, such as images viewed by the eyes or voices heard by the ears. Though any individual channel might be incomplete or noisy, humans can naturally align and fuse information collected from multiple channels to grasp the key concepts needed for a better understanding of the world. One of the core aspirations in Artificial Intelligence (AI) is to develop algorithms that endow computers with the ability to learn effectively from multimodal (or multi-channel) data, akin to the sights and sounds that humans attain through vision and language to make sense of the world around them. For example, computers could mimic this ability by searching for the images most relevant to a text query (or vice versa), or by describing the content of an image in natural language. Vision-and-Language (VL), a popular research area at the nexus of Computer Vision and Natural Language Processing (NLP), aims to achieve this goal. This monograph surveys vision-language pre-training (VLP) methods for multimodal intelligence developed in the last few years. Approaches are grouped into three categories: (i) VLP for image-text tasks, such as image captioning, image-text retrieval, visual question answering, and visual grounding; (ii) VLP for core computer vision tasks, such as (open-set) image classification, object detection, and segmentation; and (iii) VLP for video-text tasks, such as video captioning, video-text retrieval, and video question answering. For each category, a comprehensive review of state-of-the-art methods is presented, and the progress made and challenges still faced are discussed, using specific systems and models as case studies.
In addition, for each category, advanced topics being actively explored in the research community are presented, such as big foundation models, unified modeling, in-context few-shot learning, knowledge, robustness, and computer vision in the wild, to name a few.

Full Product Details

Author:   Zhe Gan, Linjie Li, Chunyuan Li, Lijuan Wang
Publisher:   now publishers Inc
Imprint:   now publishers Inc
Weight:   0.294kg
ISBN 13:   9781638281320
ISBN 10:   1638281327
Pages:   204
Publication Date:   05 December 2022
Audience:   Professional and scholarly, Professional & Vocational
Format:   Paperback
Publisher's Status:   Active
Availability:   In Print

Table of Contents

1. Introduction
2. Tasks, Benchmarks, and Early Models
3. VLP for Image-Text Tasks
4. VLP for Core Vision Tasks
5. VLP for Video-Text Tasks
6. VL Systems in Industry
7. Conclusions and Research Trends
Acknowledgments
References


Countries Available

All regions