If you work in machine learning, you probably know the best practice of using a validation set (also known as a development set) in addition to your test set. If your memory needs refreshing, you can read the Wikipedia article; there are also many other valuable resources online.
Suppose you are writing a paper or report for a benchmark challenge. You submitted some runs and got scores back on the test set. Should you also report your validation set scores in your paper? Yes, you should! The validation scores tell us which run was considered the best before anyone looked at the test set. That hypothesis is then tested on the test set, and readers of your report can draw conclusions accordingly.

Not reporting your validation scores can lead to unintended data dredging. Without them, a reader cannot see whether your hypothesis was confirmed (i.e. whether the run that was best on validation was also best on test) or whether another run happened to come out on top. Performing more runs makes it more likely to obtain higher scores purely by chance. Omitting the validation scores can therefore suggest a pattern in the runs when there is no underlying effect.
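To make this concrete, here is a minimal sketch of the reporting practice described above: select the best run on the validation set, then report both validation and test scores for every run. The run names and scores below are made up purely for illustration.

```python
# Hypothetical runs with made-up scores (higher is better).
runs = {
    "run_a": {"validation": 0.71, "test": 0.69},
    "run_b": {"validation": 0.74, "test": 0.70},
    "run_c": {"validation": 0.68, "test": 0.72},  # best on test, but not selected
}

# The hypothesis: the run with the highest validation score is the best model.
# This choice is made before consulting the test scores.
best_run = max(runs, key=lambda name: runs[name]["validation"])

# Report validation AND test scores for every run, so readers can see
# which run was selected before the test set was looked at.
for name, scores in sorted(runs.items()):
    marker = " <- selected on validation" if name == best_run else ""
    print(f"{name}: val={scores['validation']:.2f} test={scores['test']:.2f}{marker}")
```

Note that in this toy example the selected run (`run_b`) is not the one with the highest test score: reporting only test scores would hide that the validation-based hypothesis was not confirmed, which is exactly the information a reader needs.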