FYI, OpenAI's nonprofit parent has launched a new grant program for responsible AI:
https://openai.com/blog/democratic-inputs-to-ai
This is not concerned with the problem that's probably of highest concern to Wikimedians (AI systems as bullshit generators) but with the longer-term steering of AI as it becomes more capable. I expect critics will see it as more evidence that OpenAI is deflecting from the harm their systems are doing today by focusing attention on long-term hypotheticals. In typical OpenAI fashion, they speak of AGI (human-level intelligence) and superintelligence.
Wikipedia is the very first example they cite for "creative approaches that inspire us". The example in their mockup, deciding whether an AI should provide advice on recreational drug use, is also the kind of thing that should be familiar to folks who've been part of content policy discussions. Their mockup also reminded me a bit of NPOV in its attempt to arrive at a formulation that is widely agreeable.
Individuals and orgs can apply; I did not see any country exclusions, but I also didn't see a way to get to the fine print without applying. The deadline is June 24.
I don't intend to apply, but if there are any applicants from Wikimedia-land, I'd be happy to help with suggestions/input/review, if wanted :).
Warmly, Erik
wikimedia-l@lists.wikimedia.org