Book Description

Micro-videos, a new form of user-generated content, have been spreading widely across various social platforms, such as Vine, Kuaishou, and TikTok.

Unlike traditional long videos, micro-videos are usually recorded anywhere with smart mobile devices and last only a few seconds. Owing to their brevity and low bandwidth cost, micro-videos have attracted growing user enthusiasm. The blossoming of micro-videos opens the door to many promising applications, ranging from network content caching to online advertising. It is thus highly desirable to develop effective schemes for micro-video understanding.

Micro-video understanding is, however, non-trivial due to the following challenges: (1) how to represent micro-videos that convey only one or a few high-level themes or concepts; (2) how to utilize the hierarchical structure of venue categories to guide micro-video analysis; (3) how to alleviate the influence of low quality caused by complex surrounding environments and camera shake; (4) how to model multimodal sequential data, i.e., the textual, acoustic, visual, and social modalities, to enhance micro-video understanding; and (5) how to construct large-scale benchmark datasets for analysis. These challenges have been largely unexplored to date.

In this book, we focus on addressing these challenges by proposing several state-of-the-art multimodal learning methods. To demonstrate their effectiveness, we apply them to three practical tasks of micro-video understanding: popularity prediction, venue category estimation, and micro-video routing. In particular, we first build three large-scale real-world micro-video datasets for these tasks. We then present a multimodal transductive learning framework for micro-video popularity prediction. Furthermore, we introduce several multimodal cooperative learning approaches and a multimodal transfer learning scheme for micro-video venue category estimation. We also develop a multimodal sequential learning approach for micro-video recommendation. Finally, we conclude the book and outline future research directions in multimodal learning toward micro-video understanding.

Table of Contents

  1. Preface
  2. Acknowledgments
  3. Introduction
    1. Micro-Video Proliferation
    2. Practical Tasks
      1. Micro-Video Popularity Prediction
      2. Micro-Video Venue Categorization
      3. Micro-Video Routing
    3. Research Challenges
    4. Our Solutions
    5. Book Structure
  4. Data Collection
    1. Dataset I for Popularity Prediction
    2. Dataset II for Venue Category Estimation
    3. Dataset III for Micro-Video Routing
    4. Summary
  5. Multimodal Transductive Learning for Micro-Video Popularity Prediction
    1. Background
    2. Research Problems
    3. Feature Extraction
      1. Observations
      2. Social Modality
      3. Visual Modality
      4. Acoustic Modality
      5. Textual Modality
    4. Related Work
      1. Popularity Prediction
      2. Multi-View Learning
      3. Low-Rank Subspace Learning
    5. Notations and Preliminaries
    6. Multimodal Transductive Learning
      1. Objective Formulation
      2. Optimization
      3. Experiments and Results
    7. Multi-Modal Transductive Low-Rank Learning
      1. Objective Formulation
      2. Optimization
      3. Experiments and Results
    8. Summary
  6. Multimodal Cooperative Learning for Micro-Video Venue Categorization
    1. Background
    2. Research Problems
    3. Related Work
      1. Multimedia Venue Estimation
      2. Multi-Modal Multi-Task Learning
      3. Dictionary Learning
    4. Multimodal Consistent Learning
      1. Optimization
      2. Task Relatedness Estimation
      3. Complexity Analysis
      4. Experiments
    5. Multimodal Complementary Learning
      1. Multi-Modal Dictionary Learning
      2. Tree-Guided Multi-Modal Dictionary Learning
      3. Optimization
      4. Online Learning
      5. Experiments
    6. Multimodal Cooperative Learning
      1. Multimodal Early Fusion
      2. Cooperative Networks
      3. Attention Networks
      4. Experiments
    7. Summary
  7. Multimodal Transfer Learning in Micro-Video Analysis
    1. Background
    2. Research Problems
    3. Related Work
    4. External Sound Dataset
    5. Deep Multi-Modal Transfer Learning
      1. Sound Knowledge Transfer
      2. Multi-Modal Fusion
      3. Deep Network for Venue Estimation
      4. Training
    6. Experiments
      1. Experimental Settings
      2. Acoustic Representation (RQ1)
      3. Performance Comparison (RQ2)
      4. External Knowledge Effect (RQ3)
      5. Visualization
      6. Study of DARE Model (RQ4)
    7. Summary
  8. Multimodal Sequential Learning for Micro-Video Recommendation
    1. Background
    2. Research Problems
    3. Related Work
    4. Multimodal Sequential Learning
      1. The Temporal Graph-Based LSTM Layer
      2. The Multi-Level Interest Modeling Layer
      3. The Prediction Layer
    5. Experiments
      1. Experimental Settings
      2. Baselines
      3. Overall Comparison
      4. Component-Wise Evaluation of ALPINE
      5. Justification of the Temporal Graph
      6. Attention Visualization
    6. Summary
  9. Research Frontiers
    1. Micro-Video Annotation
    2. Micro-Video Captioning
    3. Micro-Video Thumbnail Selection
    4. Semantic Ontology Construction
    5. Pornographic Content Identification
  10. Bibliography
  11. Authors' Biographies