3.4. RELATED WORK 27
presented a novel low-rank matrix approximation method with a structural incoherence con-
straint, which decomposes the raw data into a set of representative bases with associated sparse
error matrices. Based on the principle of self-representation, Liu et al. [95] proposed the low-
rank representation (LRR) method to search for the lowest-rank representation among all the
candidates. To overcome the incompetence of LRR in handling unobserved, insufficient, and
extremely noisy data, Liu and Yan [96] further developed an advanced version of LRR, called
latent low-rank representation (LatLRR), for subspace segmentation. Zhang et al. [196] pro-
posed a structured low-rank representation method for image classification, which constructs
a semantic-structured and constructive dictionary by incorporating class label information into
the training stage. Zhou et al. [205] provided a novel supervised and low-rank-based discrim-
inative feature learning method that integrates LatLRR with ridge regression to minimize the
classification error directly.
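As a concrete illustration of the self-representation principle behind these methods, the LRR objective of Liu et al. [95] can be written in its commonly used form as

```latex
\min_{Z,\,E} \; \|Z\|_{*} + \lambda \|E\|_{2,1}
\quad \text{s.t.} \quad X = XZ + E ,
```

where the columns of X are the data samples, the nuclear norm ||Z||_* promotes a low-rank self-representation, the l_2,1 norm encourages the noise term E to be sparse at the column (sample) level, and the parameter lambda balances the two terms.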
To handle data that are generated from multiple views in many real-world applications,
some multi-view low-rank subspace learning methods have been developed to search for a la-
tent low-dimensional common subspace such that it can capture the commonality among all the
views. For example, Xia et al. [176] proposed to construct a transition probability matrix from
each view and then recover a shared low-rank transition probability matrix via low-rank and
sparse decomposition. Liu et al. [101] presented a novel low-rank multi-view matrix completion
(lrMMC) method for multi-label image classification, where a set of basic matrices are learned
by minimizing the reconstruction errors and the rank of the latent common representation. In
the case that the view information of the testing data is unknown, Ding and Fu [39] proposed
a novel low-rank common subspace (LRCS) algorithm in a weakly supervised setting, where
only the view information is employed in the training phase. In [41], a dual low-rank decompo-
sition model was developed to learn a low-dimensional view-invariant subspace. To guide the
decomposition process, two supervised graph regularizers were considered to separate the class
structure and view structure. Li et al. [86] proposed a novel approach, named low-rank discrim-
inant embedding (LRDE), by considering the correlations between views and the geometric
structures contained within each view simultaneously. These multi-view low-rank learning
approaches have proven effective when different feature views are complementary to each
other.
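To make the low-rank machinery underlying these methods concrete, the following is a minimal sketch (a generic illustration, not the implementation of any cited method) of singular value thresholding (SVT), a standard building block for recovering a low-rank matrix from partially observed entries, as used in completion-based approaches such as lrMMC [101]. The function name and parameter defaults are illustrative choices:

```python
import numpy as np

def svt_complete(M, mask, tau=None, step=1.5, iters=1000):
    """Singular value thresholding (SVT) sketch for low-rank matrix completion.

    M    : matrix whose observed entries are valid (others arbitrary)
    mask : boolean array, True where M is observed
    """
    if tau is None:
        tau = 5 * max(M.shape)  # common heuristic scale for the threshold
    Y = np.zeros_like(M, dtype=float)
    X = np.zeros_like(M, dtype=float)
    for _ in range(iters):
        # Proximal step for the nuclear norm: soft-threshold singular values.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt
        # Dual ascent step enforcing agreement on the observed entries.
        Y = Y + step * mask * (M - X)
    return X

# Recover a random rank-2 matrix from roughly 60% of its entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
mask = rng.random(M.shape) < 0.6
X = svt_complete(M, mask)
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)  # small if recovery succeeds
```

Because the number of observed entries here well exceeds the degrees of freedom of a rank-2 matrix, the iteration typically recovers the full matrix to small relative error; in the multi-view setting, the same low-rank principle is applied to a latent common representation rather than to a single data matrix.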
Although low-rank representation enables an effective learning mechanism in exploring
the low-rank structure in noisy datasets [177], only a limited number of low-rank models have
been developed to address popularity prediction in social networks. The prediction of video
popularity can be considered a standard regression problem. To the best of our knowledge, one
of the works most closely related to our approach is [202], in which a multi-view low-rank
regression model is presented by imposing low-rank constraints on the multi-view regression
model. However, in that work, the structure and relations among different views were ignored.
To overcome this drawback, we propose to learn a set of view-specific projections by maximizing
the total correlations among views to map multi-view features into a common space. Another