5.4 EXPERIMENT
To validate the effectiveness of the proposed model, we conducted extensive experiments on the
real-world Dataset I by answering the following questions.
• Does our PAICM outperform the state-of-the-art methods?
• What is the effect of NMF in the prototype-guided attribute manipulation?
• How does the proposed PAICM perform in the complementary fashion item retrieval?
We first describe the experimental settings and then present the experimental results with detailed
analyses for each of the above research questions.
5.4.1 EXPERIMENT SETTINGS
Auxiliary Dataset. To evaluate our PAICM, apart from Dataset I, we utilized an auxiliary
benchmark dataset, DeepFashion [81], to train the attribute classification networks and obtain
the semantic attribute representations of fashion items. This benchmark comprises 33,881
fashion items, each of which is labeled with 18 attributes covering 303 attribute elements. Table 5.1
shows several attribute examples and the corresponding attribute elements. Owing to the uneven
distribution of the data, we performed data augmentation for the attribute classes with
limited samples through multiple operations (e.g., copy, rotation, and shift) using an integrated tool of
Keras.
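The augmentation step above can be sketched in a dependency-free way. The chapter uses an integrated Keras tool; the snippet below is only an illustrative NumPy approximation of the same three operations (copy, rotation, and shift), with assumed parameter values and image sizes.

```python
import numpy as np

def augment(images, shift=2):
    """Return the originals plus augmented copies via copy, rotation,
    and shift. `images` has shape (n, h, w, c) with square h == w.
    The shift amount and rotation angle are illustrative assumptions."""
    copies = images.copy()                        # plain copy
    rotated = np.rot90(images, k=1, axes=(1, 2))  # 90-degree rotation
    shifted = np.roll(images, shift, axis=2)      # horizontal shift
    return np.concatenate([images, copies, rotated, shifted], axis=0)

# A dummy batch of 4 RGB images standing in for an under-represented class.
rare = np.random.rand(4, 64, 64, 3)
aug = augment(rare)
print(aug.shape)  # (16, 64, 64, 3): the class is oversampled fourfold
```

In practice one would apply small random rotations and shifts (as Keras' augmentation utilities do) rather than the fixed transforms shown here.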
Table 5.1: Examples of attributes and the corresponding attribute elements
Attribute | Attribute Element
Type of trousers | Harem pants, straight pants
Length of trousers | Three-quarter pants, pirate shorts
Type of clothes buttons | Single-breasted, one button
Fitness of clothes | Rectangle-shaped, hourglass-shaped
Length of dresses | Below knee, above knee
Type of dresses | A-line dress, pouf dress
Style of clothes | Forest living style, boyfriend-style
Texture of clothes | Contrast color, hollow
Attribute Representation Learning. For the semantic attribute representation
learning, we adopted an architecture similar to AlexNet [62], which consists of five convolutional
layers followed by three fully connected layers. We randomly divided the auxiliary dataset into
two chunks: a training set (80%) and a testing set (20%), and chose the widely used cross-entropy