A new model for semantic photograph description combining basic levels and user-assigned descriptors

Hyuk-Jin Lee, Diane Neal

Research output: Contribution to journal › Article › peer-review

18 Citations (Scopus)

Few studies have been conducted to identify users’ desired semantic levels of image access when describing, searching, and retrieving photographs online. The basic level, or the level of abstraction most commonly used to describe an item, is a cognitive theory currently under consideration in image retrieval research. This study investigates potential basic levels of description for online photographs by testing the Hierarchy for Online Photograph Representation (HOPR) model, which is based on a need for a model that addresses users’ basic levels of photograph description and retrieval. We developed the HOPR model using the following three elements as guides: the most popular tags of all time on Flickr, the Pyramid model for visual content description by Jörgensen, Jaimes, Benitez, and Chang, and the nine classes of image content put forth by Burford, Briggs, and Eakins. In an exploratory test of the HOPR model, participants were asked to describe their first reaction to, and possible free-text indexing terms for, a small set of personal photographs. Content analysis of the data indicated a clear set of user preferences that are consistent with prior image description studies. Generally speaking, objects in the photograph and events taking place in the photograph were the most commonly used levels of description. The preliminary HOPR model shows promise for its intended utility, but further refinement is needed through additional research.
Original language: English
Pages (from-to): 547-565
Number of pages: 19
Journal: Journal of Information Science
Issue number: 5
Early online date: 22 Jul 2010
Publication status: Published - 1 Oct 2010
Externally published: Yes


  • entry point
  • image indexing
  • image retrieval

