Hi everyone,
This month's showcase, focused on *supporting multimedia on Wikipedia*, will start in about 45 minutes. Please join us at https://www.youtube.com/watch?v=wpSQD9Bc8Ek.
On Tue, Apr 16, 2024 at 8:25 AM Kinneret Gordon kgordon@wikimedia.org wrote:
Hi everyone,
The next Research Showcase will be live-streamed tomorrow, Wednesday, April 17, at 9:30 AM PDT / 16:30 UTC. Find your local time here. The theme for this showcase is Supporting Multimedia on Wikipedia.
You are welcome to watch via the YouTube stream: https://www.youtube.com/watch?v=wpSQD9Bc8Ek. As usual, you can join the conversation in the YouTube chat as soon as the showcase goes live.
This month's presentations:
Towards image accessibility solutions grounded in communicative principles
By Elisa Kreiss
Images have become an omnipresent communicative tool, and Wikipedia is no exception. However, the undeniable benefits they carry for sighted communicators turn into a serious accessibility challenge for people who are blind or have low vision (BLV). BLV users often have to rely on textual descriptions of those images to participate equally in an increasingly image-dominated online world. In this talk, I will present how framing accessibility as a communication problem highlights important ways forward in redefining image accessibility on Wikipedia. I will introduce the Wikipedia-based dataset Concadia and use it to discuss the successes and shortcomings of image captions and alt texts for accessibility, and how the usefulness of accessibility descriptions is fundamentally contextual. I will conclude by highlighting the potential and risks of AI-based solutions and discussing implications for different Wikipedia editing communities.
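For readers curious what "contextual" description data can look like in practice, here is a minimal illustrative sketch in Python. The record fields and the helper below are assumptions for illustration only, not the actual Concadia schema; the point is the distinction the talk draws between a caption (which often contextualizes an image) and alt text (which should describe it):

```python
from dataclasses import dataclass

@dataclass
class ImageRecord:
    """Hypothetical record pairing an image with its textual descriptions.

    Field names are illustrative assumptions, not the Concadia schema.
    """
    image_url: str        # the image itself
    caption: str          # visible caption: often adds context, may not describe the image
    alt_text: str         # accessibility description: should convey what the image shows
    article_context: str  # the surrounding paragraph the image appears in

def needs_review(record: ImageRecord) -> bool:
    """Flag records whose alt text is missing or merely duplicates the caption,
    a common accessibility failure mode on image-heavy pages."""
    alt = record.alt_text.strip()
    return not alt or alt.lower() == record.caption.strip().lower()

# Example: a caption that contextualizes but does not describe the image.
rec = ImageRecord(
    image_url="https://upload.wikimedia.org/example.jpg",
    caption="The bridge shortly after its 1937 opening.",
    alt_text="",  # missing: a BLV reader learns nothing about the image
    article_context="The Golden Gate Bridge opened to traffic in 1937...",
)
print(needs_review(rec))  # True
```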
Automatic Multi-Path Web Story Creation from a Structural Article
By Daniel Nkemelu
Web articles, such as those on Wikipedia, serve as a major source of knowledge dissemination and online learning. However, their in-depth information, often presented as dense text, may not be suitable for mobile browsing, even in a responsive user interface. We propose an automatic approach that converts a structured article of any length into a set of interactive Web Stories that are well suited to mobile experiences. We focused on Wikipedia articles and developed Wiki2Story, a pipeline based on language and layout models, to demonstrate the concept. Wiki2Story dynamically slices an article and plans one or more Story paths according to the document hierarchy. For each slice, it generates a multi-page summary Story composed of text and image pairs in visually appealing layouts. We derived design principles from an analysis of manually created Stories. We ran our pipeline on 500 Wikipedia documents and conducted user studies to review selected outputs. Results showed that Wiki2Story effectively captured and presented salient content from the original articles and sparked interest in viewers.
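As a rough mental model of the slicing step, here is a simplified Python sketch under our own assumptions, not the authors' implementation: it walks an article's heading hierarchy and emits one Story path per top-level section, with one page per subsection. Where this sketch naively truncates text, Wiki2Story would instead use its language and layout models to summarize and pair text with images:

```python
from dataclasses import dataclass, field

@dataclass
class Section:
    """A node in an article's heading hierarchy (illustrative, not Wiki2Story's types)."""
    title: str
    text: str
    children: list["Section"] = field(default_factory=list)

@dataclass
class StoryPage:
    headline: str
    body: str  # in the real pipeline, a model-generated summary paired with an image

def plan_story_paths(article: Section, max_chars: int = 280) -> list[list[StoryPage]]:
    """Slice an article by its top-level sections, one Story path per section."""
    paths = []
    for section in article.children:
        pages = [StoryPage(section.title, section.text[:max_chars])]
        pages += [StoryPage(sub.title, sub.text[:max_chars]) for sub in section.children]
        paths.append(pages)
    return paths

article = Section("Golden Gate Bridge", "", [
    Section("History", "Construction began in 1933...", [
        Section("Opening", "The bridge opened in May 1937..."),
    ]),
    Section("Design", "The bridge is a suspension bridge..."),
])
for path in plan_story_paths(article):
    print([page.headline for page in path])
# prints ['History', 'Opening'] then ['Design'] -- one headline list per Story path
```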
--
Kinneret Gordon
Lead Research Community Officer
Wikimedia Foundation