I don't know if there's anyone within Wikimedia doing this kind of research, but it might be interesting to have someone attend this and see if there's anything we could use to improve our own image search techniques.
I recall we had a bot on Commons at one stage that would grab the local captions of images used on various projects and then put those captions back on the Commons image page. Seems a bit the same :)
cheers, Brianna user:pfctdayelise
See message 2 at http://linguistlist.org/issues/18/18-2794.html#2
Date: 21-Jan-2008
Location: Funchal, Madeira, Portugal
Web Site: http://www.visapp.org/MMIU.htm
Scope

The number of digital images being generated, stored, managed and shared through the internet is growing at a phenomenal rate. Press and photo agencies receive and manage thousands to millions of images per day, and end-users (e.g. amateur reporters) can easily participate in the related professional workflows. In an environment of approximately one billion photos searchable in online databases worldwide, finding the most relevant or most appealing image for a given task (e.g. to illustrate a story) has become an extremely difficult process. In these huge repositories, many images carry additional information coming from different sources.
Information related to the image capture, such as date, location, camera settings or the name of the photographer, is often available from the digital camera used to take the photograph. The owner can further add a relevant title, filename and/or descriptive caption, or any other textual reference. If the image is uploaded to a shared photo collection, additional comments are frequently added to it by other users. Images used in documents such as web pages, on the other hand, frequently have captions and surrounding text. All of this information can be considered image metadata and is valuable for organizing, sharing, and processing images.
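As a rough illustration of the idea above, the metadata sources the CFP lists (capture information from the camera, owner-supplied text, community comments, and text from documents that use the image) can be pooled into one searchable record per image. This is only a sketch; the record fields and sample values are invented for illustration, not taken from any standard or from the workshop itself:

```python
from dataclasses import dataclass

@dataclass
class ImageRecord:
    """One image plus the metadata sources described in the CFP."""
    capture: dict        # EXIF-style capture info: date, location, camera settings
    owner_text: list     # title, filename, caption supplied by the owner
    comments: list       # comments added by other users in a shared collection
    context_text: list   # captions/surrounding text from documents using the image

    def search_text(self) -> str:
        """Flatten every textual source into one lowercase string for indexing."""
        parts = [str(v) for v in self.capture.values()]
        parts += self.owner_text + self.comments + self.context_text
        return " ".join(parts).lower()

# Hypothetical example record:
record = ImageRecord(
    capture={"date": "2008-01-21", "location": "Funchal", "camera": "DSC-X"},
    owner_text=["madeira_coast.jpg", "Coastline near Funchal"],
    comments=["beautiful light!"],
    context_text=["The rugged coast of Madeira."],
)
print("funchal" in record.search_text())  # the location is now searchable text
```

A real system would of course keep the sources separate and weight them differently (an owner caption is usually more reliable than a passing comment), which is exactly the kind of fusion question the workshop topics below raise.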
However, it is not always evident how to exploit the information contained in such metadata in an intelligent, generic or task-specific way, and linking this information with the actual image content is still an open challenge. The aim of this workshop is to offer a meeting opportunity for researchers, content providers and related user-service providers: to elaborate on the needs and practices of digital image management, to share ideas that point to new directions in using metadata for image understanding, and to demonstrate related technology representative of the state of the art and beyond.
Research paper topics that may be addressed include, but are not limited to:
- image metadata pattern discovery and mining
- interaction of image metadata and visual content
- image and video metadata enrichment
- automatic metadata creation
- hybrid collaborative and machine learning techniques for metadata creation and/or fusion
- cross image-text categorization and retrieval
- image auto-captioning and annotation transfer
- learning user preferences, aesthetic and emotional measures from opinion mining
- integration of camera settings with image categorization, retrieval or enhancement
- application-specific issues of metadata mining:
  - integration of visual and geo-location information for improved virtual tourism
  - stock-photo web-based image retrieval
Important Dates
Full Paper Submission: October 15, 2007
Authors Notification: November 7, 2007
Final Paper Submission and Registration: November 19, 2007