Chiming in as a member of the Wikimedia Foundation Research team (which likely biases the examples I'm aware of). I'd say the most common type of NLP that shows up in our applications is tokenization / language analysis -- i.e., splitting wikitext into words and sentences. As Trey said, this tokenization is non-trivial for English and gets much harder in languages that have more complex constructions or don't use spaces to delimit words. These tokens often then become inputs to other types of models that aren't necessarily NLP. There are also more complex NLP technologies that don't just identify words but try to identify similarities between them, translate them, etc.
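
To make the "splitting text into words is harder than it sounds" point concrete, here's a toy Python snippet (nothing we actually run; just an illustration) of why naive whitespace splitting breaks down. The English example mostly works but leaves punctuation attached; the Japanese sentence, written without spaces, comes back as a single "token":

    # Toy illustration: why "split into words" is non-trivial.
    english = "The quick brown fox can't jump 32.5 feet, right?"
    japanese = "ウィキペディアは誰でも編集できる百科事典です"  # "Wikipedia is an encyclopedia anyone can edit", written without spaces

    print(english.split())
    # ['The', 'quick', 'brown', 'fox', "can't", 'jump', '32.5', 'feet,', 'right?']
    # punctuation sticks to tokens; "can't" and "32.5" need policy decisions

    print(japanese.split())
    # ['ウィキペディアは誰でも編集できる百科事典です']
    # one giant "token"; real segmentation needs a dictionary or a model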

Some examples below. Additionally, I indicated whether each application is rule-based (following a series of deterministic heuristics) or ML (a learned, probabilistic model), in case that's of interest:
I've also done some thinking, which might be of interest, about what a natural language modeling strategy for Wikimedia could look like, one that balances the effectiveness of models with the equity and sustainability of supporting so many different language communities: https://meta.wikimedia.org/wiki/User:Isaac_(WMF)/Language_modeling

Hope that helps.

Best,
Isaac


On Wed, Jun 22, 2022, 10:43 Trey Jones <tjones@wikimedia.org> wrote:
Do you have examples of projects using NLP in Wikimedia communities?

I do! Defining NLP is something of a moving target; the most common definition I encountered when I worked in industry is that "NLP" is a buzzword for "any language processing you do that your competitors don't". Getting away from profit-driven buzzwords, I have a pretty generous definition of NLP: any software that improves language-based interactions between people and computers.

Guillaume mentioned CirrusSearch in general, but there are lots of specific parts within search. I work on a lot of NLP-type stuff for search, and I write a lot of documentation on MediaWiki, so this is biased towards stuff I have worked on or know about.

Language analysis is the general process of converting text (say, of Wikipedia articles) into tokens (approximately "words" in English) to be stored in the search index. There are lots of different levels of complexity in the language analysis. We currently use Elasticsearch, and they provide a lot of language-specific analysis tools (link to Elastic language analyzers), which we customize and build on.

Here is part of the config for English, reordered to be chronological, rather than alphabetical, and annotated:

"text": {
    "type": "custom",
    "char_filter": [
        "word_break_helper", — break_up.words:with(uncommon)separators
        "kana_map" — map Japanese Hiragana to Katakana (notes)
    ],
    "tokenizer": "standard" — break text into tokens/words; not trivial for English, very hard for other languages (blog post)
    "filter": [
        "aggressive_splitting",
 —splitting of more likely multi-part ComplexTokens
        "homoglyph_norm",
 —correct typos/vandalization which mix Latin and Cyrillic letters (notes)
        "possessive_english",
 —special processing for English's possessive forms
        "icu_normalizer",
—normalization of text (blog post)
        "stop", —removal of stop words (blog post, section "To be or not to be indexed")
        "icu_folding", —more aggressive normalization
        "remove_empty", —misc bookkeeping
        "kstem", —stemming (blog post)
        "custom_stem" —more stemming
    ],
},
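
If you want to see what that chain actually does to some text, Elasticsearch's _analyze API will run a named analyzer and return the tokens it produces. A rough sketch (assuming a local Elasticsearch instance and an index created with the settings above; the index name "enwiki_test" is made up):

    # Run the "text" analyzer above on a sample phrase and print the tokens.
    # Assumes Elasticsearch on localhost:9200 and an index (here called
    # "enwiki_test", a made-up name) created with the analyzer config above.
    import requests

    resp = requests.post(
        "http://localhost:9200/enwiki_test/_analyze",
        json={"analyzer": "text", "text": "Hope's favorite encyclopedias"},
    )
    resp.raise_for_status()
    for tok in resp.json()["tokens"]:
        print(tok["token"], tok["position"])
    # expect something like: hope 0 / favorite 1 / encyclopedia 2
    # (possessive 's stripped, lowercased, "encyclopedias" stemmed by kstem)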

Tokenization, normalization, and stemming can vary wildly between languages. Some other elements (from Elasticsearch or custom-built by us):
  • Stemmers and stop words for specific languages, including some open-source ones that we ported, and some developed with community help.
  • Elision processing (l'homme == homme)
  • Normalization for digits (١ ٢ ٣ / १ २ ३ / ①②③ / 123); there's a toy sketch of this and of elision after this list
  • Custom lowercasing—Greek, Irish, and Turkish have special processing (notes)
  • Normalization of written Khmer (blog post)
  • Notes on lots more...
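
As a small concrete illustration of the digit normalization and elision items above (hand-rolled Python, nothing to do with our actual Elasticsearch filters, which handle many more cases):

    import unicodedata

    def normalize_digits(text):
        # Map any character Unicode knows as a digit (Arabic-Indic,
        # Devanagari, circled digits, ...) to its ASCII value.
        out = []
        for ch in text:
            d = unicodedata.digit(ch, None)
            out.append(str(d) if d is not None else ch)
        return "".join(out)

    def strip_french_elision(token):
        # Very rough elision handling: l'homme -> homme, qu'il -> il
        for prefix in ("qu'", "l'", "d'", "j'", "n'", "s'", "t'", "m'", "c'"):
            if token.lower().startswith(prefix):
                return token[len(prefix):]
        return token

    print(normalize_digits("١٢٣ / १२३ / ①②③"))  # -> 123 / 123 / 123
    print(strip_french_elision("l'homme"))        # -> homme
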
We also did some work improving "Did you mean" suggestions; the feature currently uses both the built-in suggestions from Elasticsearch (not always great, but there are lots of them) and new suggestions from a module we called "Glent" (much better, but not as many suggestions).

We have some custom language detection available on some Wikipedias, so that if you don't get very many results and your query looks like it is in another language, we show results from that other language. For example, searching for Том Хэнкс on English Wikipedia will show results from Russian Wikipedia. (too many notes)
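
For flavor, here's a drastically simplified stand-in for that idea in Python (the real detector is much more sophisticated and handles many languages; this toy only asks "is the query mostly Cyrillic?"):

    import unicodedata

    def looks_cyrillic(query, threshold=0.5):
        # Toy heuristic: what fraction of the letters are Cyrillic?
        # (Cyrillic doesn't necessarily mean Russian, of course.)
        letters = [ch for ch in query if ch.isalpha()]
        if not letters:
            return False
        cyrillic = sum("CYRILLIC" in unicodedata.name(ch, "") for ch in letters)
        return cyrillic / len(letters) >= threshold

    query = "Том Хэнкс"
    if looks_cyrillic(query):
        print("Few local results; also trying ru.wikipedia.org for", query)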

Outside of our search work, there are lots more. Some that come to mind:
  • Language Converter supports languages with multiple writing systems, which is sometimes easy and sometimes really hard; there's a toy sketch of the easy end after this list. (blog post)
  • There's a Wikidata gadget on French Wikipedia and others that appends results from Wikidata and generates descriptions in various languages based on the Wikidata information. For example, searching for Molenstraat Vught on French Wikipedia gives no local results, but shows two "Results from Wikidata" / "Résultats sur Wikidata" (if you are logged in, you get results in your preferred language if possible, otherwise in the language of the project):
    • Molenstraat ; hameau de la commune de Vught (in French, when I'm not logged in)
    • Molenstraat ; street in Vught, the Netherlands (fallback to English for some reason)
  • The whole giant Content Translation project that uses machine translation to assist translating articles across wikis. (blog post)
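
To give a sense of the "easy" end of Language Converter mentioned above: Serbian Cyrillic-to-Latin conversion is close to a character-for-character mapping (the other direction, and languages like Chinese, are where it gets genuinely hard). A toy sketch, not the actual MediaWiki LanguageConverter code:

    # Toy Serbian Cyrillic -> Latin converter: nearly one Cyrillic letter to
    # one (or two) Latin letters. The real LanguageConverter also deals with
    # exceptions, markup, and much messier language pairs.
    SR_CYR_TO_LAT = {
        "а": "a", "б": "b", "в": "v", "г": "g", "д": "d", "ђ": "đ",
        "е": "e", "ж": "ž", "з": "z", "и": "i", "ј": "j", "к": "k",
        "л": "l", "љ": "lj", "м": "m", "н": "n", "њ": "nj", "о": "o",
        "п": "p", "р": "r", "с": "s", "т": "t", "ћ": "ć", "у": "u",
        "ф": "f", "х": "h", "ц": "c", "ч": "č", "џ": "dž", "ш": "š",
    }

    def sr_cyrillic_to_latin(text):
        out = []
        for ch in text:
            latin = SR_CYR_TO_LAT.get(ch.lower(), ch)
            # crude case restoration: "Љ" -> "Lj", "В" -> "V"
            out.append(latin.capitalize() if ch.isupper() else latin)
        return "".join(out)

    print(sr_cyrillic_to_latin("Википедија"))  # -> Vikipedija
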
There's lots more out there, I'm sure—but I gotta run!
—Trey

Trey Jones
Staff Computational Linguist, Search Platform
Wikimedia Foundation

UTC–4 / EDT

 