This benchmark operates either in full mode for registered users (U) or in a restricted anonymous mode (no custom mosaic creation, no database entries for test results or algorithm data). To register, go to the registration page. Individual hot spots are explained via the contextual help in the status bar.
Welcome to the Prague texture segmentation benchmark, whose purpose is
- to mutually compare and rank different (dynamic/static) texture segmenters (supervised or unsupervised),
- to support the development of new segmentation and classification methods.
This server allows you
- to obtain customized experimental texture mosaics and their corresponding ground truth (U),
- to obtain the benchmark texture mosaic set with its corresponding ground truth,
- to evaluate your working segmentation results and compare them with state-of-the-art algorithms (U),
- to include your algorithm (reference, abstract, benchmark results) into the benchmark database (U),
- to check evaluation details for single mosaics (criteria values and resulting thematic maps),
- to rank segmentation algorithms according to the most common benchmark criteria,
- to obtain the resulting criteria tables in LaTeX code or export the data in MATLAB format (U),
- to select a user-defined subset of criteria (U).
- Computer-generated texture mosaics and benchmarks are composed of the following image types:
- All generated texture mosaics can be corrupted with additive Gaussian, Poisson, or salt-and-pepper noise (see the sketch below).
- The corresponding training sets (hold-out) are supplied in the classification (supervised) mode.
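As an illustration of these corruption options, here is a minimal Python/NumPy sketch of how a grayscale mosaic image with values in [0, 1] could be degraded with each noise type. This is not the benchmark's own code; all function names and parameter defaults are hypothetical.

```python
import numpy as np

rng = np.random.default_rng()

def add_gaussian(img, sigma=0.05):
    """Additive zero-mean Gaussian noise with standard deviation sigma."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_poisson(img, scale=255.0):
    """Poisson (shot) noise: scaled pixel intensities act as expected counts."""
    return np.clip(rng.poisson(img * scale) / scale, 0.0, 1.0)

def add_salt_and_pepper(img, amount=0.02):
    """Set a random fraction of pixels to pure black or pure white."""
    out = img.copy()
    mask = rng.random(img.shape) < amount
    out[mask] = rng.integers(0, 2, size=mask.sum()).astype(img.dtype)
    return out
```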
- Submitted results are stored in the server database and used for algorithm ranking based on a criterion selected from the following set (the F-measure and the variation of information are sketched in code after this list):
  - average RANK (over displayed criteria),
  - region-based (including the sensitivity graphs):
    - CS - correct segmentation,
    - OS - over-segmentation,
    - US - under-segmentation,
    - ME - missed error,
    - NE - noise error,
    - O - omission error,
    - C - commission error,
    - CA - class accuracy,
    - CO - recall = correct assignment,
    - CC - precision = object accuracy,
    - I. - type I error,
    - II. - type II error,
    - EA - mean class accuracy estimate,
    - MS - mapping score,
    - RM - root mean square proportion estimation error,
    - CI - comparison index,
    - F-measure (weighted harmonic mean of precision and recall) graph,
  - consistency measures:
    - GCE - global consistency error,
    - LCE - local consistency error,
    - dM - Mirkin metric,
    - dD - Van Dongen metric,
    - dVI - variation of information.
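To make the precision/recall quantities concrete, the following Python sketch computes pixel-wise precision (CC), recall (CO), and the weighted F-measure for a single class. The benchmark's actual region-based criteria involve region matching and thresholding, so treat this only as the underlying idea; all names here are illustrative.

```python
import numpy as np

def f_measure(seg, gt, label, beta=1.0):
    """Pixel-wise precision/recall/F-measure for one class label,
    given predicted and ground-truth integer label maps."""
    pred = (seg == label)
    true = (gt == label)
    tp = np.logical_and(pred, true).sum()
    recall = tp / max(true.sum(), 1)      # CO: correct assignment
    precision = tp / max(pred.sum(), 1)   # CC: object accuracy
    denom = beta**2 * precision + recall
    # Weighted harmonic mean of precision and recall
    f = (1 + beta**2) * precision * recall / denom if denom > 0 else 0.0
    return precision, recall, f
```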
- Result values for the dynamic benchmark are averages of the criteria values over all video frames.
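Likewise, among the consistency measures above, the variation of information (dVI) compares two label maps through their entropies and mutual information, VI = H(S) + H(G) - 2 I(S;G). A minimal sketch, assuming non-negative integer labels and natural-log entropy (not the benchmark's reference implementation):

```python
import numpy as np

def variation_of_information(seg, gt):
    """dVI between two integer label maps, from their joint histogram."""
    s, g = seg.ravel(), gt.ravel()
    joint = np.zeros((s.max() + 1, g.max() + 1))
    np.add.at(joint, (s, g), 1)            # joint label counts
    p = joint / joint.sum()                # joint distribution
    ps, pg = p.sum(axis=1), p.sum(axis=0)  # marginal distributions

    def H(q):
        q = q[q > 0]
        return -np.sum(q * np.log(q))

    # VI = H(S) + H(G) - 2*I(S;G) = 2*H(S,G) - H(S) - H(G)
    return 2 * H(p) - H(ps) - H(pg)
```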