Evaluation

The contest criterion is the average rank over 21 benchmark criteria, which fall into four groups: region-based (5), pixel-wise (11), consistency measures (2), and clustering comparison criteria (3). Each performance criterion compares the ground-truth image regions with the corresponding machine-segmented regions.
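To illustrate the average-rank criterion, the sketch below ranks a set of methods on each benchmark criterion and averages those ranks. All names and scores are hypothetical, the scoring direction (higher is better) is assumed purely for illustration, and ties are not handled:

```python
def average_ranks(scores):
    """scores: dict mapping method name -> list of per-criterion scores.
    Assumes higher scores are better for every criterion (illustrative only).
    Returns dict mapping method name -> average rank (lower is better)."""
    methods = list(scores)
    n_criteria = len(next(iter(scores.values())))
    ranks = {m: 0.0 for m in methods}
    for c in range(n_criteria):
        # Sort methods by their score on criterion c, best first,
        # and accumulate each method's rank (1 = best).
        ordered = sorted(methods, key=lambda m: scores[m][c], reverse=True)
        for rank, m in enumerate(ordered, start=1):
            ranks[m] += rank
    # Average the accumulated ranks over all criteria.
    return {m: ranks[m] / n_criteria for m in methods}

# Hypothetical example with two methods and three criteria:
scores = {
    "method_A": [0.9, 0.8, 0.7],
    "method_B": [0.8, 0.9, 0.9],
}
print(average_ranks(scores))
```

A method that is merely good across all 21 criteria can thus outrank one that excels on a few criteria but performs poorly on the rest.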
The top ten methods will be verified by the organizers using the submitted code. During the contest submission period, each participant will see only their own results and the non-contest results in the benchmark. The detailed mutual-comparison table will be made public after the contest submission deadline.