Data Selection in Papers

I just thought about this while writing my own paper.
Someone please check the correctness for me 🙂
There is a funny paper at NIPS that talks about the same thing,
but in a more formal way.
 
Assume you have a method A and want to compare it with a previous method B.
Methods A and B are independent.
The accuracies of the two methods are p_A and p_B respectively.
Given a sample, the probability that method A solves it correctly
while method B fails is p_A*(1-p_B).
The number of samples until the first such case is geometrically distributed,
so the average number N of samples you need to try before finding a good one
to show that your method A is better than method B is N = 1/[p_A*(1-p_B)].
Suppose your method A actually sucks: p_A = 0.5, a random guess;
and method B is fine: p_B = 0.75.
Then N = 8, not a big number at all.
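
As a sanity check, here is a minimal simulation sketch in Python (the function name and trial count are my own choices, not from any paper):

    import random

    def samples_until_cherry(p_a, p_b):
        """Count samples tried until A is correct while B is wrong."""
        n = 0
        while True:
            n += 1
            if random.random() < p_a and not (random.random() < p_b):
                return n

    p_a, p_b = 0.5, 0.75  # A is a random guess; B is fine
    trials = 100_000
    avg = sum(samples_until_cherry(p_a, p_b) for _ in range(trials)) / trials
    print(f"average samples needed: {avg:.2f}")  # close to 1/(p_a*(1-p_b)) = 8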
 
In other words, suppose that in your paper you select k examples from a set of
K samples to show that your method A is better than a previous method B
whose accuracy is p_B.
The expected number of such examples among the K samples is K*p_A*(1-p_B),
so all we can say is that the expectation of p_A has a lower bound of k/[K*(1-p_B)].
For example, you say your method A is applied to a sequence of 1,000 frames,
and a figure at the end of your paper gives 10 examples to show that method
A is superior to method B, whose reported accuracy is 0.75. Then the lower
bound on p_A's expectation is 10/(1,000*0.25) = 0.04. Not useful at all.
But this is actually happening.
Usually the authors do not want to give out the number K,
but sometimes K just cannot be hidden.
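
The bound is trivial to compute while reading a paper; here is a small helper sketch in Python (the function name is hypothetical):

    def p_a_lower_bound(k, K, p_b):
        """Lower bound on E[p_A] implied by k showcased examples
        out of K samples, given method B's reported accuracy p_b."""
        return k / (K * (1 - p_b))

    # The example above: 10 showcase frames from a 1,000-frame sequence,
    # against a baseline with reported accuracy 0.75.
    print(p_a_lower_bound(k=10, K=1000, p_b=0.75))  # 0.04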
 
This shows the possibility of data selection in papers,
but it also gives us some guidelines for paper reading.
1) If a paper only shows results on its own data
and does not give the number K, the real p_A can be any value.
2) If a paper says the experiments are done on a set of K samples,
but no statistical results are given, except for some examples of perfect
results by method A that fail under a previous method B,
you can evaluate the lower bound on p_A's expectation based on p_B.
It is possible that p_B is also exaggerated, but it is still a lower bound.
Unfortunately, it is not difficult to find instances of the above two cases
in the top vision conferences, like CVPR, ICCV, or ECCV.
 
A convincing paper should include quantitative experimental
results on a standard public data set.
This is not sufficient, but it is necessary.

5 Responses to Data Selection in Papers

  1. Li says:

    Makes sense! Good to know another way to cheat.

  2. Sticky says:

    You are right… 🙂
    There have been times when I had a good algorithm and looked for public data to show it is actually good, but found none available except for a few clips on which others already had perfect results (maybe I didn't search carefully enough); and also times when I had a not-so-good algorithm and selected a few clips myself to show it's worth something…
    The worst thing is, I got rejected in the former case and accepted in the latter. Suddenly it seems all I should do is make a plausible idea look good.
    But I believe this happens only when public data sets are very limited, and that will change.
    Taking up evaluations is an important step. 🙂

  3. (anonymous) says:

    Yes, actually it is good to work on a problem where there is no public test set available:
    then you can make the first one.
    But still, sometimes they just ignore you.

  4. (anonymous) says:

    And we call these papers "Boosting" 🙂

  5. Pei says:

    However, enforcing a standard dataset will only lead to system-wide overfitting by the entire research community, which is what happened for face detection, and what is going on for stereo. Basically, research sucks. That's why I only spend my serious brain cells on vacations nowadays.
