Distance metric learning (DML) aims to learn embeddings in which examples from the same class are closer than examples from different classes. It can be cast as an optimization problem with triplet constraints. Due to the vast number of triplet constraints, a sampling strategy is essential for DML. With the tremendous success of deep learning in classification, it has also been applied to DML. When learning embeddings with deep neural networks (DNNs), only a mini-batch of data is available at each iteration, so the set of triplet constraints has to be sampled within the mini-batch. Since a mini-batch cannot capture the neighborhood structure of the original data well, the learned embeddings are sub-optimal. In contrast, optimizing the SoftMax loss, which is a classification loss, with a DNN shows superior performance on certain DML tasks. This inspires us to investigate the formulation of SoftMax. Our analysis shows that SoftMax loss is equivalent to a smoothed triplet loss where each class has a single center. In real-world data, one class can contain several local clusters rather than a single one, e.g., birds of different poses. Therefore, we propose the SoftTriple loss, which extends the SoftMax loss with multiple centers for each class. Compared with conventional deep metric learning algorithms, optimizing the SoftTriple loss learns the embeddings without a sampling phase by mildly increasing the size of the last fully connected layer. Experiments on benchmark fine-grained data sets demonstrate the effectiveness of the proposed loss function.
Distance metric learning (DML) has been extensively studied in the past decades due to its broad range of applications, e.g., k-nearest neighbor classification [24, 29] and clustering [31]. With an appropriate distance metric, examples from the same class should be closer than examples from different classes. Many algorithms have been proposed to learn a good distance metric [15, 16, 21, 29].
In most conventional DML methods, examples are represented by hand-crafted features, and DML learns a feature mapping that projects examples from the original feature space to a new space. The distance can be computed as the Mahalanobis distance [11]

$d_M(x_i, x_j) = \sqrt{(x_i - x_j)^\top M (x_i - x_j)}$

where $M$ is the learned distance metric. With this formulation, the main challenge of DML comes from the dimensionality of the input space. As a metric, the learned matrix $M$ has to be positive semi-definite (PSD), and the cost of keeping the matrix PSD can be up to $O(d^3)$, where $d$ is the dimensionality of the original features. Early work directly applies PCA to shrink the original space [29]. Later, various strategies were developed to reduce the computational cost [16, 17].
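To make the PSD requirement concrete, the Mahalanobis distance and the eigendecomposition-based PSD projection can be sketched in a few lines of NumPy. This is our own minimal illustration, not code from any referenced method; the function names are ours.

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance d_M(x, y)^2 = (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ M @ d)

def project_psd(M):
    """Project a symmetric matrix onto the PSD cone by clipping negative
    eigenvalues; the eigendecomposition is the O(d^3) step mentioned above."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.clip(vals, 0.0, None)) @ vecs.T
```

With `M` equal to the identity, `mahalanobis_sq` reduces to the squared Euclidean distance; iterating a gradient step followed by `project_psd` is the projection scheme whose cost the dimensionality-reduction strategies above try to avoid.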
Those approaches can obtain a good metric from the input features, but hand-crafted features are task-independent and may lose information, which limits the performance of DML. With the success of deep neural networks in classification [7], researchers have considered learning the embeddings directly with deep neural networks [15, 21]. Without explicit feature extraction, deep metric learning boosts the performance by a large margin [21]. In deep metric learning, the dimensionality of the input features is no longer a challenge, since neural networks can learn low-dimensional features directly from raw materials, e.g., images, documents, etc. In contrast, generating appropriate constraints for optimization becomes challenging for deep metric learning. This is because most deep neural networks are trained with the stochastic gradient descent (SGD) algorithm and only a mini-batch of examples is available at each iteration. Since embeddings are optimized with a loss defined on an anchor example and its neighbors (e.g., the active set of pairwise
[31] or triplet [29] constraints), the examples in a mini-batch may not capture the overall neighborhood well, especially for relatively large data sets. Moreover, a mini-batch contains $O(m^2)$ pairs and $O(m^3)$ triplets, where $m$ is the size of the mini-batch. An effective sampling strategy over the mini-batch is essential even for a small batch (e.g., $m=32$) to learn the embeddings efficiently. Many efforts have been devoted to sampling an informative mini-batch [19, 21] and to sampling triplets within a mini-batch [12, 24]. Some work has also tried to reduce the total number of triplets with proxies [14, 18]. The sampling phase for the mini-batch and the constraints not only loses information but also complicates the optimization. In this work, we consider learning embeddings without sampling constraints. Recently, research has shown that embeddings obtained directly by optimizing the SoftMax loss, which was proposed for classification, perform well on simple distance-based tasks [22, 30]
and face recognition [2, 9, 10, 27, 28]. This inspires us to investigate the formulation of the SoftMax loss. Our analysis demonstrates that SoftMax loss is equivalent to a smoothed triplet loss. Since the last fully connected layer provides a single center for each class, the triplet constraint derived from the SoftMax loss is defined on an original example, its corresponding center, and a center from a different class. Therefore, embeddings obtained by optimizing the SoftMax loss can work well as a distance metric. However, a class in real-world data can consist of multiple local clusters, as illustrated in Fig. 1, and a single center is insufficient to capture the inherent structure of the data. Consequently, embeddings learned from the SoftMax loss can fail in complex scenarios [22]. In this work, we propose to improve the SoftMax loss by introducing multiple centers for each class, and we denote the novel loss as the SoftTriple loss. Compared with a single center, multiple centers can better capture the hidden distribution of the data because they help to reduce the intra-class variance. This property is also crucial for preserving the triplet constraints over original examples while training with multiple centers. Compared with existing deep DML methods, the number of triplets in SoftTriple is linear in the number of original examples. Since the centers are encoded in the last fully connected layer, the SoftTriple loss can be optimized without sampling triplets. Fig. 1 illustrates the proposed SoftTriple loss. Apparently, SoftTriple loss has to determine the number of centers for each class. To alleviate this issue, we develop a strategy that sets a sufficiently large number of centers for each class at the beginning and then applies an $L_{2,1}$ regularizer to obtain a compact set of centers. We demonstrate the proposed loss on fine-grained visual categorization tasks, where capturing local clusters is essential for good performance [17].
The rest of this paper is organized as follows. Section 2 reviews the related work of conventional distance metric learning and deep metric learning. Section 3 analyzes the SoftMax loss and proposes the SoftTriple loss accordingly. Section 4 conducts comparisons on benchmark data sets. Finally, Section 5 concludes this work and discusses future directions.
Many DML methods have been developed for the setting where input features are provided [29, 31]. The dimensionality of the input features is a critical challenge for these methods due to the PSD projection, and many strategies have been proposed to alleviate it. The most straightforward way is to reduce the dimension of the input space by PCA [29]. However, PCA is task-independent and may hurt the performance of the learned embeddings. Some works reduce the number of effective parameters with a low-rank assumption [8]. [16] decreases the computational cost by reducing the number of PSD projections. [17] proposes to learn the dual variables in a low-dimensional space induced by random projections and then recover the metric in the original space. After the challenge from dimensionality is addressed, the hand-crafted features become the bottleneck for further improvement.
The forms of the constraints for metric learning have also evolved in these methods. Early work focuses on optimizing pairwise constraints, which require the distances between examples from the same class to be small and those between examples from different classes to be large [31]. Later, [29] develops triplet constraints: given an anchor example, the distance between the anchor and a similar example should be smaller than the distance between the anchor and a dissimilar example by a large margin. Evidently, the number of pairwise constraints is $O(n^2)$ while the number of triplet constraints can be up to $O(n^3)$, where $n$ is the number of original examples. Compared with pairwise constraints, triplet constraints optimize the geometry of local clusters and are more suitable for modeling intra-class variance. In this work, we focus on triplet constraints.
Deep metric learning aims to learn the embeddings directly from raw materials (e.g., images) with deep neural networks [15, 21]. With task-dependent embeddings, the performance of metric learning improves dramatically. However, most deep models are trained with SGD, which provides only a mini-batch of data at each iteration. Since the size of a mini-batch is small, the information in it is limited compared to the original data. To alleviate this problem, algorithms have to develop an effective sampling strategy to generate the mini-batch and then sample triplet constraints from it. A straightforward way is to increase the size of the mini-batch [21]. However, a large mini-batch suffers from the GPU memory limitation and can also increase the difficulty of sampling triplets. Later, [19] proposes to generate the mini-batch from neighboring classes. Besides, there are various sampling strategies for obtaining constraints [3, 12, 21, 24]. [21] proposes to sample semi-hard negative examples. [24] adopts all negative examples within the margin for each positive pair. [12] develops distance-weighted sampling, which samples examples according to the distance from the anchor example. [3] selects hard triplets with a dynamic violation margin from a hierarchical class-level tree. However, all of these strategies may fail to capture the distribution of the whole data set. Moreover, they complicate the optimization in deep DML.
Recently, some researchers have considered reducing the total number of triplets to alleviate the challenge of their large number. [14] constructs the triplet loss with one original example and two proxies. Since the number of proxies is significantly smaller than the number of original examples, the proxies can be kept in memory, which avoids sampling over different batches. However, it provides only a single proxy for each class when label information is available, which is similar to SoftMax. [18] proposes a conventional DML algorithm that constructs the triplet loss only with latent examples; it assigns multiple centers to each class and further reduces the number of triplets. In this work, we propose to learn the embeddings by optimizing the proposed SoftTriple loss, which eliminates the sampling phase and captures the local geometry of each class simultaneously.
In this section, we first introduce the SoftMax loss and the triplet loss and then study the relationship between them to derive the SoftTriple loss.
Denote the embedding of the $i$-th example as $x_i$ and the corresponding label as $y_i$. The conditional probability output by a deep neural network can be estimated via the SoftMax operator

$p(y_i|x_i) = \frac{\exp(w_{y_i}^\top x_i)}{\sum_{j=1}^{C}\exp(w_j^\top x_i)}$

where $[w_1, \dots, w_C] \in \mathbb{R}^{d\times C}$ is the last fully connected layer, $C$ denotes the number of classes, and $d$ is the dimension of the embeddings. The corresponding SoftMax loss is

$\ell_{\mathrm{SoftMax}}(x_i) = -\log\frac{\exp(w_{y_i}^\top x_i)}{\sum_j \exp(w_j^\top x_i)}$
A deep model can be learned by minimizing this loss over all examples. The loss has been prevalently applied for classification tasks [7].
Given a triplet $(x_i, x_j, x_k)$, DML aims to learn good embeddings such that examples from the same class are closer than examples from different classes, i.e.,

$\|x_i - x_j\|_2^2 + \delta \le \|x_i - x_k\|_2^2$

where $x_i$ and $x_j$ are from the same class, $x_k$ is from a different class, and $\delta$ is a predefined margin. When each example has unit length (i.e., $\|x\|_2 = 1$), the triplet constraint can be simplified as

$x_i^\top x_j - x_i^\top x_k \ge \delta \qquad (1)$

where we ignore the rescaling of $\delta$. The corresponding triplet loss can be written as

$\ell_{\mathrm{triplet}}(x_i, x_j, x_k) = \big[\delta + x_i^\top x_k - x_i^\top x_j\big]_+ \qquad (2)$

It is obvious from Eqn. 1 that the number of total triplets can be cubic in the number of examples, which makes sampling inevitable for most triplet-based DML algorithms.
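The hinge form in Eqn. 2 for unit-length embeddings can be sketched as follows (a minimal NumPy illustration under our own function name; `delta` is the margin):

```python
import numpy as np

def triplet_loss(x_a, x_p, x_n, delta=0.1):
    """Hinge loss for one triplet of unit-length embeddings, following
    Eqn. 2: [delta + x_a.x_n - x_a.x_p]_+ with inner-product similarity."""
    return max(0.0, delta + float(x_a @ x_n) - float(x_a @ x_p))

# With n examples, every (anchor, positive, negative) combination is a
# candidate triplet, so the total count grows as O(n^3) -- hence sampling.
```

The loss is zero exactly when the anchor is closer (by at least the margin) to the positive than to the negative, which is the constraint in Eqn. 1.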
With unit length for both $w_j$ and $x_i$, the normalized SoftMax loss can be written as

$\ell_{\mathrm{SoftMax_{norm}}}(x_i) = -\log\frac{\exp(\lambda\, x_i^\top w_{y_i})}{\sum_j \exp(\lambda\, x_i^\top w_j)} \qquad (3)$

where $\lambda$ is a scaling factor.
Surprisingly, we find that minimizing the normalized SoftMax loss with the smoothing term is equivalent to optimizing a smoothed triplet loss:

$\ell_{\mathrm{SoftMax_{norm}}}(x_i) = \max_{p\in\Delta}\; \lambda\sum_j p_j\, x_i^\top(w_j - w_{y_i}) + H(p) \qquad (4)$

where $p$ is a distribution over the classes, $\Delta$ is the simplex $\{p : \sum_j p_j = 1,\; \forall j,\, p_j \ge 0\}$, and $H(p)$ denotes the entropy of the distribution $p$.
Proposition 1 indicates that the SoftMax loss optimizes triplet constraints consisting of an original example and two centers, i.e., terms of the form $x_i^\top w_{y_i} - x_i^\top w_j$. Compared with the triplet constraints in Eqn. 1, the target of the SoftMax loss is

$\forall j, \quad x_i^\top w_{y_i} - x_i^\top w_j \ge 0$

Consequently, the embeddings learned by minimizing the SoftMax loss are applicable to distance-based tasks even though the loss is designed for classification.
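The equivalence in Proposition 1 can be checked numerically: the normalized SoftMax loss equals the log-sum-exp of the scaled margins $\lambda(x_i^\top w_j - x_i^\top w_{y_i})$, which is exactly the entropy-regularized maximum over the simplex. A small NumPy sanity check (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, d, C, y = 20.0, 8, 5, 2
x = rng.normal(size=d); x /= np.linalg.norm(x)                # unit-length example
W = rng.normal(size=(C, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)                 # unit-length centers

logits = lam * (W @ x)
softmax_loss = float(np.log(np.exp(logits).sum()) - logits[y])

# Smoothed-triplet form: log-sum-exp of the margins to the class centers,
# i.e., the entropy-regularized maximum in Eqn. 4.
margins = logits - logits[y]
smoothed_triplet = float(np.log(np.exp(margins).sum()))
```

The two quantities agree up to floating-point error, since $\log\sum_j e^{\ell_j} - \ell_y = \log\sum_j e^{\ell_j - \ell_y}$.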
Without the entropy regularizer, the loss becomes

$\max_{p\in\Delta}\; \lambda\sum_j p_j\, x_i^\top(w_j - w_{y_i})$

which is equivalent to

$\lambda \max_j\, \big(x_i^\top w_j - x_i^\top w_{y_i}\big)$

Explicitly, it penalizes the triplet with the largest violation and becomes zero when the nearest center to $x_i$ is the corresponding center $w_{y_i}$. The entropy regularizer reduces the influence of outliers and makes the loss more robust.
The scaling factor $\lambda$ trades off the hardness of the triplets against the regularizer. Moreover, minimizing the maximal entropy makes the distribution $p$ concentrated and further pushes the example away from irrelevant centers, which implies a large-margin property. Applying a similar analysis to the ProxyNCA loss [14], $-\log\frac{\exp(x_i^\top w_{y_i})}{\sum_{j\neq y_i}\exp(x_i^\top w_j)}$, we have

$\max_{p\in\Delta}\; \sum_{j\neq y_i} p_j\, x_i^\top(w_j - w_{y_i}) + H(p)$

where $p$ is a distribution over the classes other than $y_i$. Compared with the SoftMax loss, it eliminates the benchmark triplet containing only the corresponding class center, which makes the loss unbounded. Our analysis suggests that the loss can be bounded as in Eqn. 2. Validating the bounded loss is out of the scope of this work.
Although optimizing the SoftMax loss can learn meaningful feature embeddings, its drawback is straightforward: it assumes a single center for each class, while a real-world class can contain multiple local clusters due to large intra-class variance, as in Fig. 1. The triplet constraints generated by the conventional SoftMax loss are too coarse to capture the complex geometry of the original data. Therefore, we introduce multiple centers for each class.
Now, we assume that each class has $K$ centers. Then, the similarity between the example $x_i$ and the class $c$ can be defined as

$S_{i,c} = \max_k\; x_i^\top w_c^k \qquad (5)$

Note that other definitions of similarity are also applicable in this scenario (e.g., the average similarity over the centers). We adopt a simple form to illustrate the influence of multiple centers.
With this definition of similarity, the triplet constraint requires an example to be closer to its corresponding class than to the other classes

$\forall j \neq y_i, \quad S_{i,y_i} - S_{i,j} \ge 0$

As mentioned above, minimizing the entropy term helps to pull the example toward the corresponding center. To break the tie explicitly, we introduce a small margin $\delta$ as in the conventional triplet loss in Eqn. 1 and define the constraints as

$\forall j \neq y_i, \quad S_{i,y_i} - \delta \ge S_{i,j}$
By replacing the similarity in Eqn. 4, we obtain the HardTriple loss

$\ell_{\mathrm{HardTriple}}(x_i) = -\log\frac{\exp\big(\lambda(S_{i,y_i}-\delta)\big)}{\exp\big(\lambda(S_{i,y_i}-\delta)\big) + \sum_{j\neq y_i}\exp\big(\lambda S_{i,j}\big)} \qquad (6)$
The HardTriple loss improves the SoftMax loss by providing multiple centers for each class. However, it requires the max operator to obtain the nearest center in each class; this operator is not smooth, and the assignment can be sensitive when multiple centers are close. Inspired by the SoftMax loss, we improve the robustness by smoothing the max operator.
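The smoothing amounts to replacing the hard max over a class's centers with a temperature-scaled softmax average. A minimal NumPy sketch (the function name and `gamma` parameter are ours; `gamma` plays the role of the entropy weight):

```python
import numpy as np

def smoothed_max_similarity(x, Wc, gamma=0.1):
    """Entropy-smoothed version of max_k x.w_c^k: weight each center's
    similarity by a softmax over similarities with temperature gamma."""
    sims = Wc @ x                 # similarity of x to each of the K centers
    z = sims / gamma
    q = np.exp(z - z.max())       # subtract the max for numerical stability
    q /= q.sum()
    return float(q @ sims)
```

As `gamma` goes to zero this recovers the hard max used by HardTriple; a larger `gamma` spreads the assignment over several centers, which makes the similarity smooth in the centers.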
Consider the problem

$\max_{p\in\Delta}\; \sum_k p_k\, x_i^\top w_c^k$

which is equivalent to

$\max_k\; x_i^\top w_c^k \qquad (7)$

To smooth it, we add an entropy regularizer over the distribution $p$ as

$\max_{p\in\Delta}\; \sum_k p_k\, x_i^\top w_c^k + \gamma H(p)$

With a similar analysis as in Proposition 1, $p$ has the closed-form solution

$p_k = \frac{\exp(\frac{1}{\gamma}\, x_i^\top w_c^k)}{\sum_{k'}\exp(\frac{1}{\gamma}\, x_i^\top w_c^{k'})}$

Substituting it back into Eqn. 7, we define the relaxed similarity between the example $x_i$ and the class $c$ as

$S'_{i,c} = \sum_k \frac{\exp(\frac{1}{\gamma}\, x_i^\top w_c^k)}{\sum_{k'}\exp(\frac{1}{\gamma}\, x_i^\top w_c^{k'})}\; x_i^\top w_c^k$
By applying the smoothed similarity, we define the SoftTriple loss as

$\ell_{\mathrm{SoftTriple}}(x_i) = -\log\frac{\exp\big(\lambda(S'_{i,y_i}-\delta)\big)}{\exp\big(\lambda(S'_{i,y_i}-\delta)\big) + \sum_{j\neq y_i}\exp\big(\lambda S'_{i,j}\big)} \qquad (8)$
Fig. 2 illustrates the differences between the SoftMax loss and the proposed losses.
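Putting the pieces together, the SoftTriple loss for a single example can be sketched in NumPy as follows. This is our own minimal sketch under assumed function and parameter names; the implementations listed on this page use PyTorch or TensorFlow.

```python
import numpy as np

def soft_triple_loss(x, W, y, C, K, lam=20.0, gamma=0.1, delta=0.01):
    """SoftTriple loss for one example (Eqn. 8).
    x: unit-length embedding of shape (d,);
    W: (C*K, d) unit-length centers, K centers per class;
    y: index of the true class."""
    sims = (W @ x).reshape(C, K)             # similarity to every center
    q = np.exp(sims / gamma)
    q /= q.sum(axis=1, keepdims=True)        # softmax over each class's centers
    s = (q * sims).sum(axis=1)               # relaxed similarity S'_{i,c}
    logits = lam * s
    logits[y] -= lam * delta                 # margin on the true class only
    m = logits.max()                         # stabilize the log-sum-exp
    return float(np.log(np.exp(logits - m).sum()) - (logits[y] - m))
```

Since the centers `W` are just the (enlarged) last fully connected layer, the loss is differentiable in both the embedding and the centers, so a standard classification training loop optimizes everything jointly without any triplet sampling.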
Finally, we will show that the strategy of applying centers to construct triplet constraints can recover the constraints on original triplets.
Given two examples $x_i$ and $x_j$ that are from the same class and share the same nearest center, and an example $x_k$ from a different class, if the triplet constraint containing the centers is satisfied with the margin $\delta$, then the triplet constraint over the original examples $x_i$, $x_j$, and $x_k$ holds with a margin that shrinks only by the distances of the examples to their nearest centers. ∎
Theorem 1 demonstrates that optimizing the triplets consisting of centers with a margin can preserve the large-margin property on the original triplet constraints. It also implies that more centers can help to reduce the intra-class variance. In the extreme case where the number of centers equals the number of examples, the intra-class variance becomes zero. However, adding more centers increases the size of the last fully connected layer, which slows the optimization and increases the computational cost. Besides, it may incur overfitting. Therefore, we have to choose an appropriate number of centers for each class that yields a small approximation error while keeping a compact set of centers. We demonstrate the strategy in the next subsection.
Finding an appropriate number of centers for the data is a challenging problem that also appears in unsupervised learning, e.g., clustering. The number of centers $K$ trades off efficiency against effectiveness. In conventional DML algorithms, $K$ equals the number of original examples, which makes the total number of triplet constraints up to cubic in the number of original examples. In the SoftMax loss, $K = 1$ reduces the number of constraints to be linear in the number of original examples, which is efficient but can be ineffective. Without prior knowledge about the distribution of each class, it is hard to set $K$ precisely. Instead of trying to set the appropriate $K$ for each class, we propose to set a sufficiently large $K$ and then encourage similar centers to merge with each other. This keeps the diversity of the generated centers while shrinking the number of unique centers.
For each class $c$, we can generate a matrix $M_c$ whose rows are the pairwise differences between its centers, i.e., rows of the form $w_c^j - w_c^k$. If $w_c^j$ and $w_c^k$ are similar, they can collapse into the same center such that $\|w_c^j - w_c^k\|_2 = 0$, which is the $L_2$ norm of the corresponding row of $M_c$. Therefore, we regularize the $L_2$ norms of the rows of $M_c$ to obtain a sparse set of centers, which can be written as the $L_{2,1}$ norm

$\|M_c\|_{2,1} = \sum_t \|M_c^t\|_2$

By accumulating the $L_2$ norms over the pairs of centers, we have the regularizer for the class $c$ as

$R(w_c^1, \dots, w_c^K) = \sum_{j=1}^{K}\sum_{k=j+1}^{K} \|w_c^j - w_c^k\|_2$

Since each $w_c^k$ has unit length, the regularizer is simplified as

$R(w_c^1, \dots, w_c^K) = \sum_{j=1}^{K}\sum_{k=j+1}^{K} \sqrt{2 - 2\,{w_c^j}^\top w_c^k} \qquad (9)$
With the regularizer, the final objective becomes

$\min\;\; \frac{1}{N}\sum_{i=1}^{N} \ell_{\mathrm{SoftTriple}}(x_i) + \frac{\tau\,\sum_{c=1}^{C} R(w_c^1,\dots,w_c^K)}{C\,K(K-1)} \qquad (10)$

where $N$ is the number of total examples and $\tau$ is a trade-off parameter.
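The per-class regularizer of Eqn. 9 can be sketched directly (NumPy; `center_regularizer` is our own name):

```python
import numpy as np

def center_regularizer(Wc):
    """Sum of pairwise L2 distances between a class's unit-length centers,
    sqrt(2 - 2 w^j.w^k) as in Eqn. 9; it is zero when all centers coincide."""
    K = Wc.shape[0]
    total = 0.0
    for j in range(K):
        for k in range(j + 1, K):
            # max(0, .) guards against tiny negatives from round-off
            total += np.sqrt(max(0.0, 2.0 - 2.0 * float(Wc[j] @ Wc[k])))
    return total
```

Minimizing this term pulls similar centers together so that redundant centers collapse, which is exactly the merging behavior the strategy above relies on.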
We conduct experiments on three benchmark fine-grained visual categorization data sets: CUB-2011, Cars196, and SOP. We follow the settings in other works [3, 14] for a fair comparison. Specifically, we adopt Inception [25] with batch normalization [5] as the backbone architecture. The parameters of the backbone are initialized with a model trained on the ImageNet ILSVRC 2012 data set [20] and then fine-tuned on the target data sets. The images are cropped to $224\times 224$ as the input of the network. During training, only random horizontal mirroring and random crop are used for data augmentation; a single center crop is taken at test time. The model is optimized by Adam with a batch size of 32. The initial learning rates for the backbone and the centers are set separately and are divided by 10 as training proceeds. Considering that images in CUB-2011 and Cars196 are similar to those in ImageNet, we freeze BN on these two data sets and keep BN training on the remaining one. Embeddings of examples and centers have unit length in the experiments. We compare the proposed loss to the normalized SoftMax loss. The SoftMax loss in Eqn. 3 is denoted as SoftMax, and we refer to the objective in Eqn. 10 as SoftTriple. We set $\lambda = 20$ and $\gamma = 0.1$ for SoftTriple. Besides, we set a small margin $\delta = 0.01$ to break the tie explicitly. The number of centers is set to $K = 10$.
We evaluate the learned embeddings from the different methods on retrieval and clustering tasks. For retrieval, we use the Recall@$k$ metric as in [24]. The quality of clustering is measured by the Normalized Mutual Information (NMI) [13]. Given a clustering assignment $\Omega$ and the ground-truth labels $\mathbb{C}$, NMI is computed as $\mathrm{NMI}(\Omega, \mathbb{C}) = \frac{2\, I(\Omega; \mathbb{C})}{H(\Omega) + H(\mathbb{C})}$, where $I(\cdot;\cdot)$ measures the mutual information and $H(\cdot)$ denotes the entropy.
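For reference, Recall@$k$ over a set of embeddings can be computed as follows (a self-contained NumPy sketch under our own function name; for NMI one would typically use a library implementation such as scikit-learn's):

```python
import numpy as np

def recall_at_k(emb, labels, k=1):
    """Fraction of queries whose k nearest neighbors (by cosine similarity,
    excluding the query itself) contain at least one same-class example."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T
    np.fill_diagonal(sim, -np.inf)           # never retrieve the query itself
    nn = np.argsort(-sim, axis=1)[:, :k]     # indices of the top-k neighbors
    hits = (labels[nn] == labels[:, None]).any(axis=1)
    return float(hits.mean())
```

Since all embeddings here have unit length, ranking by cosine similarity is equivalent to ranking by Euclidean distance.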
First, we compare the methods on the fine-grained birds data set CUB-2011 [26]. It consists of 200 species of birds and 11,788 images. Following the common practice, we split the data set so that the first 100 classes are used for training and the rest for test. We note that different works report results with different dimensions of embeddings, and the size of the embeddings has a significant impact on performance. For a fair comparison, we report the results for 64-dimensional embeddings, which are adopted by many existing methods, and for 512-dimensional embeddings, which yield state-of-the-art results on most data sets.
Table 1 summarizes the results with 64-dimensional embeddings. Note that Npairs applies a multi-scale test while all other methods take a single-crop test. For SemiHard [21], we report the result recorded in [23]. First, it is surprising to observe that the performance of SoftMax surpasses that of the existing metric learning methods. This is potentially due to the fact that the SoftMax loss optimizes the relations between examples as a smoothed triplet loss, as analyzed in Proposition 1. Second, SoftTriple demonstrates the best performance among all benchmark methods. Compared to ProxyNCA, SoftTriple improves the state-of-the-art performance by 10.9% on R@1. Besides, it is 2.3% better than SoftMax. This verifies that the SoftMax loss cannot capture the complex geometry of a real-world data set with a single center for each class. When increasing the number of centers, SoftTriple can depict the inherent structure of the data better. Finally, both SoftMax and SoftTriple show superior performance compared to existing methods, which demonstrates that meaningful embeddings can be learned without a sampling phase.
Methods | R@1 | R@2 | R@4 | R@8 | NMI |
---|---|---|---|---|---|
SemiHard [21] | 42.6 | 55.0 | 66.4 | 77.2 | 55.4 |
LiftedStruct [24] | 43.6 | 56.6 | 68.6 | 79.6 | 56.5 |
Clustering [23] | 48.2 | 61.4 | 71.8 | 81.9 | 59.2 |
Npairs [22] | 51.0 | 63.3 | 74.3 | 83.2 | 60.4 |
ProxyNCA [14] | 49.2 | 61.9 | 67.9 | 72.4 | 59.5 |
SoftMax | 57.8 | 70.0 | 80.1 | 87.9 | 65.3 |
SoftTriple | 60.1 | 71.9 | 81.2 | 88.5 | 66.2 |
Table 2 compares SoftTriple with 512-dimensional embeddings to the methods with large embeddings. HDC [32] applies a dimension of 384. Margin [12] takes 128-dimensional embeddings and uses ResNet50 [4] as the backbone. HTL [3] sets the dimension of the embeddings to 512 and reports the state-of-the-art result with the Inception backbone. With large embeddings, it is obvious that all of these methods outperform the DML methods with 64-dimensional embeddings in Table 1. This is expected, since a high-dimensional space can separate examples better, which is consistent with the observation in other work [24]. Compared with the other methods, the R@1 of SoftTriple improves by more than 8% over HTL, which has the same backbone as SoftTriple. It also increases R@1 by about 2% over Margin, which applies a stronger backbone than Inception. This shows that the SoftTriple loss also works well with large embeddings.
Methods | R@1 | R@2 | R@4 | R@8 | NMI |
---|---|---|---|---|---|
HDC [32] | 53.6 | 65.7 | 77.0 | 85.6 | - |
Margin [12] | 63.6 | 74.4 | 83.1 | 90.0 | 69.0 |
HTL [3] | 57.1 | 68.8 | 78.7 | 86.5 | - |
SoftMax | 64.2 | 75.6 | 84.3 | 90.2 | 68.3 |
SoftTriple | 65.4 | 76.4 | 84.5 | 90.4 | 69.3 |
To validate the effect of the proposed regularizer, we compare the number of unique centers for each class in Fig. 3. We set a larger number of centers to make the results explicit and then run SoftTriple with and without the regularizer in Eqn. 9. Fig. 3 illustrates that the variant without the regularizer retains a set of similar centers. In contrast, SoftTriple with the regularizer shrinks the number of centers significantly and makes the optimization effective.
Besides, we show the R@1 of SoftTriple with a varying number of centers in Fig. 4. The red line denotes the SoftTriple loss equipped with the regularizer while the blue dashed line has no regularizer. We find that when increasing the number of centers from a single one, the performance of SoftTriple improves significantly, which confirms that by leveraging multiple centers the learned embeddings can capture the data distribution better. When adding even more centers, the performance of SoftTriple remains almost the same, which shows that the regularizer helps to learn a compact set of centers and is not influenced by the initial number of centers. On the contrary, the blue dashed line illustrates that, without the regularizer, the performance degrades due to overfitting when the number of centers is over-parameterized.
Finally, we illustrate examples of retrieved images in Fig. 5. The first column shows the query image. The next four columns show the most similar images retrieved according to the embeddings learned by SoftMax. The last four columns are the similar images returned using the embeddings from SoftTriple. Evidently, embeddings from SoftMax obtain meaningful neighbors even though the objective is designed for classification. Besides, SoftTriple improves the performance and eliminates the images from different classes among the top retrieved images, which are highlighted with red bounding boxes for SoftMax.
Then, we conduct experiments on the Cars196 data set [6], which contains 196 models of cars and 16,185 images. We use the first 98 classes for training and the rest for test. Table 3 summarizes the performance with 64-dimensional embeddings. The observation is similar to that on CUB-2011. SoftMax shows superior performance and is 3.6% better than ProxyNCA on R@1. Additionally, SoftTriple further improves the performance by about 2%, which demonstrates the effectiveness of the proposed loss function.
Methods | R@1 | R@2 | R@4 | R@8 | NMI |
---|---|---|---|---|---|
SemiHard [21] | 51.5 | 63.8 | 73.5 | 82.4 | 53.4 |
LiftedStruct [24] | 53.0 | 65.7 | 76.0 | 84.3 | 56.9 |
Clustering [23] | 58.1 | 70.6 | 80.3 | 87.8 | 59.0 |
Npairs [22] | 71.1 | 79.7 | 86.5 | 91.6 | 64.0 |
ProxyNCA [14] | 73.2 | 82.4 | 86.4 | 88.7 | 64.9 |
SoftMax | 76.8 | 85.6 | 91.3 | 95.2 | 66.7 |
SoftTriple | 78.6 | 86.6 | 91.8 | 95.4 | 67.0 |
In Table 4, we present the comparison with a large dimension of embeddings. The dimension of the embeddings for all methods in the comparison is the same as described in the experiments on CUB-2011. On this data set, HTL [3] reports the state-of-the-art result, while SoftTriple outperforms it on R@1.
Finally, we evaluate the performance of the different methods on the Stanford Online Products (SOP) data set [24]. It contains 120,053 product images downloaded from eBay.com and includes 22,634 classes. We adopt the standard split, where 11,318 classes are used for training and the rest for test. Note that each class has only about 5 images, so we set $K = 2$ for this data set and discard the regularizer. We also increase the initial learning rate for the centers.
We first report the results with 64-dimensional embeddings in Table 5. In this comparison, SoftMax is better than ProxyNCA on R@1. By simply increasing the number of centers from 1 to 2, we observe that SoftTriple gains a further improvement on R@1. It confirms that multiple centers can help to capture the data structure better.
Table 6 states the performance with large embeddings. We can draw a similar conclusion as from Table 5. Both SoftMax and SoftTriple outperform the state-of-the-art methods, and SoftTriple clearly improves the state of the art on R@1. It demonstrates the advantage of learning embeddings without sampling triplet constraints.
Sampling triplets from a mini-batch of data can degrade the performance of deep metric learning due to its poor coverage of the whole data set. To address this problem, we propose the novel SoftTriple loss to learn the embeddings without sampling. By representing each class with multiple centers, the loss can be optimized with triplets defined by the similarities between the original examples and the classes. Since the centers are encoded in the last fully connected layer, we can learn embeddings with the standard SGD training pipeline for classification and eliminate the sampling phase. The consistent improvement of SoftTriple across the fine-grained benchmark data sets confirms the effectiveness of the proposed loss function. Since the SoftMax loss is prevalently applied for classification, the SoftTriple loss can also be applied there; evaluating SoftTriple on classification tasks is a direction for future work.
[7] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1106–1114, 2012.

[20] O. Russakovsky et al. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.