Benchmarking shape signatures against human perceptions of geometric similarity

D. Clark, J.R. Corney, F. Mill, H. Rea, A. Sherlock, N.K. Taylor

Research output: Contribution to journal › Article

12 Citations (Scopus)

Abstract

Manual indexing of large databases of geometric information is both costly and difficult. Because of this, research into automated retrieval and indexing schemes has focused on the development of methods for characterising 3D shapes with a relatively small number of parameters (e.g. histograms) that allow ill-defined properties such as "geometric similarity" to be computed. However, although many methods of generating these so-called shape signatures have been proposed, little work has been reported on assessing how closely these measures match human perceptions of geometric similarity. This paper details the results of a trial that compared the part families identified by human subjects with those identified by three published shape signatures. To do this, a similarity matrix for the Drexel benchmark dataset was created by averaging the results of twelve manual inspections. Three different shape signatures (D2 shape distribution, spherical harmonics and surface partitioning spectrum) were computed for each component in the dataset and then used as input to a competitive neural network that sorted the objects into varying numbers of "similar" clusters. Comparison of the human- and machine-generated clusters (i.e. families) of similar components allows the effectiveness of the signatures at duplicating human perceptions of shape to be quantified. The work reported makes two contributions. Firstly, the results of the human perception test suggest that the Drexel dataset contains objects whose perceived similarity levels range across the recorded spectrum (i.e. 0.1 to 0.9). Secondly, the results obtained from benchmarking the three shape signatures against human perception demonstrate a low rate of false positives for all three signatures and a false-negative rate that varied almost linearly with the degree of perceived similarity. In other words, the shape signatures studied were reasonably effective at matching human perception: they returned few wrong results and excluded parts in direct proportion to the level of similarity demanded by the user.
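To make the histogram-style signatures discussed above concrete, the following is a minimal Python/NumPy sketch of a D2-style shape distribution: distances between random pairs of points sampled from an object's surface are histogrammed, and two objects are compared by the distance between their normalised histograms. The function names, bin count and sample size are illustrative assumptions, not the parameters or implementation used in the paper, and the clustering stage (the competitive neural network) is omitted.

import numpy as np

def d2_signature(points, n_pairs=10000, n_bins=64, rng=None):
    """Histogram of distances between random pairs of points (D2-style signature)."""
    rng = np.random.default_rng(rng)
    i = rng.integers(0, len(points), size=n_pairs)
    j = rng.integers(0, len(points), size=n_pairs)
    dists = np.linalg.norm(points[i] - points[j], axis=1)
    # Normalise by the largest sampled distance so the signature is roughly scale-invariant.
    hist, _ = np.histogram(dists / dists.max(), bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()

def signature_distance(sig_a, sig_b):
    """L1 distance between two signatures; smaller means more similar shapes."""
    return np.abs(sig_a - sig_b).sum()

# Illustration: a cube-like and a sphere-like point cloud yield clearly different signatures.
cube = np.random.default_rng(0).uniform(-1, 1, size=(2000, 3))
sphere = np.random.default_rng(1).normal(size=(2000, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
print(signature_distance(d2_signature(cube, rng=0), d2_signature(sphere, rng=0)))

In the paper, signatures of this kind (together with spherical harmonics and the surface partitioning spectrum) are fed to a competitive neural network that groups components into clusters, which are then compared with the human-derived similarity matrix.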
Language: English
Pages: 1038-1051
Number of pages: 13
Journal: Computer-Aided Design
Volume: 38
Issue number: 9
DOIs: 10.1016/j.cad.2006.05.003
Publication status: Published - 2006

Keywords

  • geometric similarity
  • shape perception
  • D2 shape distribution
  • spherical harmonics
  • surface partitioning spectrum
  • artificial neural networks

Cite this

Clark, D., Corney, J. R., Mill, F., Rea, H., Sherlock, A., & Taylor, N. K. (2006). Benchmarking shape signatures against human perceptions of geometric similarity. Computer-Aided Design, 38(9), 1038-1051. https://doi.org/10.1016/j.cad.2006.05.003