Paper Notes | Relative Attributes


"Relative Attributes"

Project & Demo: cc.gatech.edu/~parikh/r

Authors:

1) Devi Parikh: an Assistant Professor in the School of Interactive Computing at Georgia Tech and a Visiting Researcher at Facebook AI Research (FAIR). # cc.gatech.edu/~parikh/ In 2017 the two authors co-published many papers at NIPS, ICCV, CVPR, etc.; very prolific.

2) Kristen Grauman: a Professor in the Department of Computer Science at the University of Texas at Austin, where she leads the UT-Austin Computer Vision Group. She received her Ph.D. from MIT in the Computer Science and Artificial Intelligence Laboratory in 2006. In 2017: 4 CVPR papers (1 oral, 2 spotlights), 4 ICCV papers, and 1 NIPS paper; truly impressive.

Problem:

Most current methods treat attributes as binary classification problems. However, most attributes are not strictly binary, and forcing them to be is unnatural. In the figure above: (a) is smiling; (c) is not smiling; is (b) smiling or not? (d) is a natural scene; (f) is man-made; is (e) more natural or more man-made?

Novelty:

Predict the presence of an attribute -> indicate the strength of an attribute in an image with respect to other images. [Note: the paper does not cast this as a regression problem, but as a ranking problem.]

1) Proposed a method to model relative attributes via learned ranking functions [approach], and then demonstrated their impact on novel forms of zero-shot learning and generating image descriptions [two applications].

2) For each attribute, learn a ranking function by using a quadratic loss function together with similarity constraints, as formulated below:
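The equation itself is not reproduced in these notes, so here is a reconstruction in the spirit of the paper's large-margin ranking formulation (the notation is mine and may differ slightly from the original): for each attribute m, a weight vector w_m is learned from ordered pairs O_m (image i shows the attribute more strongly than image j) and "similar" pairs S_m (comparable strength), with quadratic slack penalties:

```latex
\begin{aligned}
\min_{w_m,\,\xi,\,\gamma}\;\; & \tfrac{1}{2}\lVert w_m\rVert_2^2
  + C\Big(\sum\nolimits_{(i,j)\in O_m}\xi_{ij}^2 + \sum\nolimits_{(i,j)\in S_m}\gamma_{ij}^2\Big)\\
\text{s.t.}\;\; & w_m^\top(x_i - x_j)\ \ge\ 1 - \xi_{ij} \qquad \forall (i,j)\in O_m\\
 & \lvert w_m^\top(x_i - x_j)\rvert\ \le\ \gamma_{ij} \qquad \forall (i,j)\in S_m\\
 & \xi_{ij}\ \ge\ 0,\;\; \gamma_{ij}\ \ge\ 0
\end{aligned}
```

Once trained, the real-valued score w_m^T x (rather than a binary label) indicates the strength of attribute m in image x.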

3) Zero-Shot Learning from Relationships:

(1) For any attribute, the user can select any seen category depicting a stronger/weaker presence of the attribute to which to relate the unseen category.

(2) Our zero-shot learning setting is that the supervisor may not only associate attributes with categories, but also express how the categories relate along any number of the attributes.

(3) From a Bayesian perspective, our approach to setting the parameters of the unseen categories' generative models can be seen as priors transferred from the knowledge of the models for the seen categories.
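A minimal sketch of this zero-shot step (not the authors' code; all function names are hypothetical): seen categories are modeled as Gaussians in the relative-attribute score space, and the unseen category's mean is placed relative to the seen categories it was related to.

```python
import numpy as np

def attribute_scores(X, W):
    # X: (n_images, n_features); W: (n_attributes, n_features) learned rankers.
    return X @ W.T  # one relative-attribute score per image and attribute

def fit_seen_gaussians(scores, labels):
    # One Gaussian (mean, diagonal variance) per seen category.
    return {c: (scores[labels == c].mean(axis=0),
                scores[labels == c].var(axis=0) + 1e-6)
            for c in np.unique(labels)}

def build_unseen_model(models, relations):
    # relations[m] = ('between', a, b): the unseen category lies between seen
    # categories a and b along attribute m. (The "stronger/weaker than all
    # seen categories" cases from the paper are omitted in this sketch.)
    mu = np.zeros(len(relations))
    for m, (kind, a, b) in enumerate(relations):
        if kind == 'between':
            mu[m] = 0.5 * (models[a][0][m] + models[b][0][m])
    var = np.mean([v for (_, v) in models.values()], axis=0)  # borrowed from seen classes
    return mu, var

def classify(x_score, models):
    # Assign the test image to the category with the highest Gaussian log-likelihood.
    def loglik(mu, var):
        return -0.5 * np.sum((x_score - mu) ** 2 / var + np.log(var))
    return max(models, key=lambda c: loglik(*models[c]))
```

The point is that an unseen category never needs training images: its generative model is assembled entirely from the relative statements and the seen categories' models.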

4) Describing Images in Relative Terms:

(1) The goal is to be able to relate any new image to other images (in practice, two reference images) according to different properties.

(2) To avoid generating an overly precise description, we select two reference images that are not too similar to the test image in terms of attribute strength, yet also not too far from it, as sketched below. [The paper places them about 1/8 of the total number of images away from the test image in the ranked order.]
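A minimal sketch of that reference-image selection for one attribute (not the authors' code; `pick_references` and the 1/8 offset handling are my own naming, following the note above):

```python
import numpy as np

def pick_references(scores, test_idx, offset_frac=1 / 8):
    """Return a weaker and a stronger reference image, each roughly
    offset_frac * n positions away from the test image in the ranked order."""
    order = np.argsort(scores)                      # ascending attribute strength
    pos = int(np.where(order == test_idx)[0][0])    # rank of the test image
    step = max(1, int(len(scores) * offset_frac))
    weaker = order[max(pos - step, 0)]
    stronger = order[min(pos + step, len(scores) - 1)]
    return weaker, stronger

# Toy usage: "the test image is more natural than image `weaker`,
# but less natural than image `stronger`".
scores = np.random.rand(200)                        # toy attribute-strength scores
weaker, stronger = pick_references(scores, test_idx=42)
print(f"more natural than image {weaker}, less natural than image {stronger}")
```

Choosing references at a moderate rank distance keeps the description informative (the comparison is noticeable) without being so precise that small ranking errors would make it wrong.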

Published on 2017-12-16