jenkins-bot has submitted this change. ( https://gerrit.wikimedia.org/r/c/pywikibot/core/+/597350 )
Change subject: [scripts] Add dataextend.py script contributed by Andre Engels
......................................................................
[scripts] Add dataextend.py script contributed by Andre Engels
Change-Id: I221925e85bd426a2fe020b5f33ae91e59f6ac39d
---
M docs/scripts/wikibase.rst
M docs/scripts_ref/scripts.rst
A scripts/dataextend.py
M tests/script_tests.py
M tox.ini
5 files changed, 12,711 insertions(+), 0 deletions(-)
Approvals:
Xqt: Looks good to me, approved
jenkins-bot: Verified
--
To view, visit https://gerrit.wikimedia.org/r/c/pywikibot/core/+/597350
To unsubscribe, or for help writing mail filters, visit https://gerrit.wikimedia.org/r/settings
Gerrit-Project: pywikibot/core
Gerrit-Branch: master
Gerrit-Change-Id: I221925e85bd426a2fe020b5f33ae91e59f6ac39d
Gerrit-Change-Number: 597350
Gerrit-PatchSet: 15
Gerrit-Owner: Andre Engels <andreengels@gmail.com>
Gerrit-Reviewer: Xqt <info@gno.de>
Gerrit-Reviewer: jenkins-bot
Gerrit-CC: Dvorapa <dvorapa@seznam.cz>
Gerrit-CC: Meno25 <meno25mail@gmail.com>
Gerrit-MessageType: merged
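As with other Pywikibot scripts, the new dataextend.py is presumably invoked through the usual pwb.py wrapper, along the lines of "python pwb.py dataextend Q42", where Q42 stands in for a real Wikidata item ID; the invocation is the wrapper convention, not something confirmed by this change message.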
Xqt has submitted this change. ( https://gerrit.wikimedia.org/r/c/pywikibot/core/+/778470 )
Change subject: [IMPR] Move User to pages._user (step1)
......................................................................
[IMPR] Move User to pages._user (step1)
Change-Id: Idfb75f94c0c4478931803626db349377cad355f5
---
C pywikibot/page/_user.py
R pywikibot/page/pages.py
2 files changed, 0 insertions(+), 0 deletions(-)
Approvals:
Xqt: Verified; Looks good to me, approved
diff --git a/pywikibot/page/_pages.py b/pywikibot/page/_user.py
similarity index 100%
copy from pywikibot/page/_pages.py
copy to pywikibot/page/_user.py
diff --git a/pywikibot/page/_pages.py b/pywikibot/page/pages.py
similarity index 100%
rename from pywikibot/page/_pages.py
rename to pywikibot/page/pages.py
--
To view, visit https://gerrit.wikimedia.org/r/c/pywikibot/core/+/778470
To unsubscribe, or for help writing mail filters, visit https://gerrit.wikimedia.org/r/settings
Gerrit-Project: pywikibot/core
Gerrit-Branch: master
Gerrit-Change-Id: Idfb75f94c0c4478931803626db349377cad355f5
Gerrit-Change-Number: 778470
Gerrit-PatchSet: 1
Gerrit-Owner: Xqt <info@gno.de>
Gerrit-Reviewer: Xqt <info@gno.de>
Gerrit-Reviewer: jenkins-bot
Gerrit-MessageType: merged
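The C (copy) and R (rename) status lines above describe a history-preserving split: step 1 keeps identical content under two names, so git's copy/rename detection can link both resulting files back to the old _pages.py. Staging such a commit presumably amounts to "git mv pywikibot/page/_pages.py pywikibot/page/pages.py" followed by copying pages.py to _user.py and adding it; step 2 below then renames pages.py back to _pages.py and strips each file down, so that git blame -C and git log --follow keep working for both _pages.py and the new _user.py. (This is reconstructed from the status lines, not taken from the commit itself.)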
Xqt has submitted this change. ( https://gerrit.wikimedia.org/r/c/pywikibot/core/+/778471 )
Change subject: [IMPR] Move User to pages._user (step2)
......................................................................
[IMPR] Move User to pages._user (step2)
Change-Id: I97035c8a7eef4247dbcc0ac871526d9d7cab77d9
---
M pywikibot/CONTENT.rst
M pywikibot/page/__init__.py
R pywikibot/page/_pages.py
M pywikibot/page/_user.py
4 files changed, 38 insertions(+), 3,062 deletions(-)
Approvals:
jenkins-bot: Verified
Xqt: Looks good to me, approved
diff --git a/pywikibot/CONTENT.rst b/pywikibot/CONTENT.rst
index 54b545e..78a27be 100644
--- a/pywikibot/CONTENT.rst
+++ b/pywikibot/CONTENT.rst
@@ -116,6 +116,8 @@
+----------------------------+------------------------------------------------------+
| _revision.py | Object representing page revision |
+----------------------------+------------------------------------------------------+
+ | _user.py | Object representing a wiki user |
+ +----------------------------+------------------------------------------------------+
| _wikibase.py | Objects representing wikibase structures |
+----------------------------+------------------------------------------------------+
diff --git a/pywikibot/page/__init__.py b/pywikibot/page/__init__.py
index c1431b0..7434b8f 100644
--- a/pywikibot/page/__init__.py
+++ b/pywikibot/page/__init__.py
@@ -8,8 +8,9 @@
from pywikibot.page._filepage import FileInfo, FilePage
from pywikibot.page._links import BaseLink, Link, SiteLink, html2unicode
-from pywikibot.page._pages import BasePage, Category, Page, User
+from pywikibot.page._pages import BasePage, Category, Page
from pywikibot.page._revision import Revision
+from pywikibot.page._user import User
from pywikibot.page._wikibase import (
Claim,
ItemPage,
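The hunk above only moves the re-export, so the public import path is unchanged by the refactoring. A minimal sketch, assuming a configured site; the site code and username are placeholders, not taken from this change:

    import pywikibot
    from pywikibot.page import User  # still exported after the move

    site = pywikibot.Site('en', 'wikipedia')
    user = User(site, 'Example')  # placeholder username
    print(user.username)
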
diff --git a/pywikibot/page/pages.py b/pywikibot/page/_pages.py
similarity index 86%
rename from pywikibot/page/pages.py
rename to pywikibot/page/_pages.py
index 67977b8..dfe2945 100644
--- a/pywikibot/page/pages.py
+++ b/pywikibot/page/_pages.py
@@ -5,7 +5,6 @@
- BasePage: Base object for a MediaWiki page
- Page: A MediaWiki page
- Category: A page in the Category: namespace
-- User: A class that represents a Wiki user
Various Wikibase pages are defined in ``page._wikibase.py``,
various pages for Proofread Extensions are defined in
@@ -31,11 +30,9 @@
import pywikibot
from pywikibot import config, date, i18n, textlib
-from pywikibot.backports import Generator, Iterable, List, Tuple
+from pywikibot.backports import Generator, Iterable, List
from pywikibot.cosmetic_changes import CANCEL, CosmeticChangesToolkit
from pywikibot.exceptions import (
- APIError,
- AutoblockUserError,
Error,
InterwikiRedirectPageError,
InvalidPageError,
@@ -43,23 +40,18 @@
IsRedirectPageError,
NoMoveTargetError,
NoPageError,
- NotEmailableError,
NoUsernameError,
OtherPageSaveError,
PageSaveRelatedError,
SectionError,
UnknownExtensionError,
- UserRightsError,
)
from pywikibot.page._decorators import allow_asynchronous
from pywikibot.page._links import BaseLink, Link
-from pywikibot.page._revision import Revision
from pywikibot.site import Namespace, NamespaceArgType
from pywikibot.tools import (
ComparableMixin,
- deprecated,
first_upper,
- is_ip_address,
issue_deprecation_warning,
remove_last_args,
)
@@ -71,7 +63,6 @@
'BasePage',
'Category',
'Page',
- 'User',
)
@@ -178,12 +169,11 @@
"""Return the Site object for the data repository."""
return self.site.data_repository()
- def namespace(self):
+ def namespace(self) -> Namespace:
"""
Return the namespace of the page.
:return: namespace of the page
- :rtype: pywikibot.Namespace
"""
return self._link.namespace
@@ -737,11 +727,8 @@
return self._lastNonBotUser
- def editTime(self):
- """Return timestamp of last revision to page.
-
- :rtype: pywikibot.Timestamp
- """
+ def editTime(self) -> pywikibot.Timestamp:
+ """Return timestamp of last revision to page."""
return self.latest_revision.timestamp
def exists(self) -> bool:
@@ -820,12 +807,8 @@
return bool(self._catredirect)
- def getCategoryRedirectTarget(self):
- """
- If this is a category redirect, return the target category title.
-
- :rtype: pywikibot.page.Category
- """
+ def getCategoryRedirectTarget(self) -> 'Category':
+ """If this is a category redirect, return the target category title."""
if self.isCategoryRedirect():
return Category(Link(self._catredirect, self.site))
raise IsNotRedirectPageError(self)
@@ -835,7 +818,7 @@
ns = self.namespace()
return ns >= 0 and ns % 2 == 1
- def toggleTalkPage(self):
+ def toggleTalkPage(self) -> Optional['Page']:
"""
Return other member of the article-talk page pair for this Page.
@@ -844,7 +827,6 @@
not actually exist on the wiki.
:return: Page or None if self is a special page.
- :rtype: typing.Optional[pywikibot.Page]
"""
ns = self.namespace()
if ns < 0: # Special page
@@ -1743,13 +1725,14 @@
if not contributors:
return sum(cnt.values())
- if isinstance(contributors, User):
+ if isinstance(contributors, pywikibot.User):
contributors = contributors.username
if isinstance(contributors, str):
return cnt[contributors]
- return sum(cnt[user.username] if isinstance(user, User) else cnt[user]
+ return sum(cnt[user.username]
+ if isinstance(user, pywikibot.User) else cnt[user]
for user in contributors)
def merge_history(self, dest, timestamp=None, reason=None) -> None:
@@ -2623,408 +2606,3 @@
assert total is None or total > 0, \
'As many items as given in total already returned'
yield from check_cache(pywikibot.Timestamp.min)
-
-
-class User(Page):
-
- """
- A class that represents a Wiki user.
-
- This class also represents the Wiki page User:<username>
- """
-
- def __init__(self, source, title: str = '') -> None:
- """
- Initializer for a User object.
-
- All parameters are the same as for the Page() initializer.
- """
- self._isAutoblock = True
- if title.startswith('#'):
- title = title[1:]
- elif ':#' in title:
- title = title.replace(':#', ':')
- else:
- self._isAutoblock = False
- super().__init__(source, title, ns=2)
- if self.namespace() != 2:
- raise ValueError("'{}' is not in the user namespace!"
- .format(self.title()))
- if self._isAutoblock:
- # This user is probably being queried for purpose of lifting
- # an autoblock.
- pywikibot.output(
- 'This is an autoblock ID, you can only use it to unblock.')
-
- @property
- def username(self) -> str:
- """
- The username.
-
- Convenience method that returns the title of the page with
- namespace prefix omitted, which is the username.
- """
- if self._isAutoblock:
- return '#' + self.title(with_ns=False)
- return self.title(with_ns=False)
-
- def isRegistered(self, force: bool = False) -> bool:
- """
- Determine if the user is registered on the site.
-
- It is possible to have a page named User:xyz and not have
- a corresponding user with username xyz.
-
- The page does not need to exist for this method to return
- True.
-
- :param force: if True, forces reloading the data from API
- """
- # T135828: the registration timestamp may be None but the key exists
- return (not self.isAnonymous()
- and 'registration' in self.getprops(force))
-
- def isAnonymous(self) -> bool:
- """Determine if the user is editing as an IP address."""
- return is_ip_address(self.username)
-
- def getprops(self, force: bool = False) -> dict:
- """
- Return the properties of the user.
-
- :param force: if True, forces reloading the data from API
- """
- if force and hasattr(self, '_userprops'):
- del self._userprops
- if not hasattr(self, '_userprops'):
- self._userprops = list(self.site.users([self.username, ]))[0]
- if self.isAnonymous():
- r = list(self.site.blocks(iprange=self.username, total=1))
- if r:
- self._userprops['blockedby'] = r[0]['by']
- self._userprops['blockreason'] = r[0]['reason']
- return self._userprops
-
- def registration(self, force: bool = False):
- """
- Fetch registration date for this user.
-
- :param force: if True, forces reloading the data from API
- :rtype: pywikibot.Timestamp or None
- """
- if not self.isAnonymous():
- reg = self.getprops(force).get('registration')
- if reg:
- return pywikibot.Timestamp.fromISOformat(reg)
- return None
-
- def editCount(self, force: bool = False) -> int:
- """
- Return edit count for a registered user.
-
- Always returns 0 for 'anonymous' users.
-
- :param force: if True, forces reloading the data from API
- """
- return self.getprops(force).get('editcount', 0)
-
- def is_blocked(self, force: bool = False) -> bool:
- """Determine whether the user is currently blocked.
-
- .. versionchanged:: 7.0
- renamed from :meth:`isBlocked` method,
- can also detect range blocks.
-
- :param force: if True, forces reloading the data from API
- """
- return 'blockedby' in self.getprops(force)
-
- @deprecated('is_blocked', since='7.0.0')
- def isBlocked(self, force: bool = False) -> bool:
- """Determine whether the user is currently blocked.
-
- .. deprecated:: 7.0
- use :meth:`is_blocked` instead
-
- :param force: if True, forces reloading the data from API
- """
- return self.is_blocked(force)
-
- def is_locked(self, force: bool = False) -> bool:
- """Determine whether the user is currently locked globally.
-
- .. versionadded:: 7.0
-
- :param force: if True, forces reloading the data from API
- """
- return self.site.is_locked(self.username, force)
-
- def isEmailable(self, force: bool = False) -> bool:
- """
- Determine whether emails may be sent to this user through MediaWiki.
-
- :param force: if True, forces reloading the data from API
- """
- return not self.isAnonymous() and 'emailable' in self.getprops(force)
-
- def groups(self, force: bool = False) -> list:
- """
- Return a list of groups to which this user belongs.
-
- The list of groups may be empty.
-
- :param force: if True, forces reloading the data from API
- :return: groups property
- """
- return self.getprops(force).get('groups', [])
-
- def gender(self, force: bool = False) -> str:
- """Return the gender of the user.
-
- :param force: if True, forces reloading the data from API
- :return: 'male', 'female', or 'unknown'
- """
- if self.isAnonymous():
- return 'unknown'
- return self.getprops(force).get('gender', 'unknown')
-
- def rights(self, force: bool = False) -> list:
- """Return user rights.
-
- :param force: if True, forces reloading the data from API
- :return: user rights
- """
- return self.getprops(force).get('rights', [])
-
- def getUserPage(self, subpage: str = ''):
- """
- Return a Page object relative to this user's main page.
-
- :param subpage: subpage part to be appended to the main
- page title (optional)
- :type subpage: str
- :return: Page object of user page or user subpage
- :rtype: pywikibot.Page
- """
- if self._isAutoblock:
- # This user is probably being queried for purpose of lifting
- # an autoblock, so has no user pages per se.
- raise AutoblockUserError(
- 'This is an autoblock ID, you can only use it to unblock.')
- if subpage:
- subpage = '/' + subpage
- return Page(Link(self.title() + subpage, self.site))
-
- def getUserTalkPage(self, subpage: str = ''):
- """
- Return a Page object relative to this user's main talk page.
-
- :param subpage: subpage part to be appended to the main
- talk page title (optional)
- :type subpage: str
- :return: Page object of user talk page or user talk subpage
- :rtype: pywikibot.Page
- """
- if self._isAutoblock:
- # This user is probably being queried for purpose of lifting
- # an autoblock, so has no user talk pages per se.
- raise AutoblockUserError(
- 'This is an autoblock ID, you can only use it to unblock.')
- if subpage:
- subpage = '/' + subpage
- return Page(Link(self.username + subpage,
- self.site, default_namespace=3))
-
- def send_email(self, subject: str, text: str, ccme: bool = False) -> bool:
- """
- Send an email to this user via MediaWiki's email interface.
-
- :param subject: the subject header of the mail
- :param text: mail body
- :param ccme: if True, sends a copy of this email to the bot
- :raises NotEmailableError: the user of this User is not emailable
- :raises UserRightsError: logged in user does not have 'sendemail' right
- :return: operation successful indicator
- """
- if not self.isEmailable():
- raise NotEmailableError(self)
-
- if not self.site.has_right('sendemail'):
- raise UserRightsError("You don't have permission to send mail")
-
- params = {
- 'action': 'emailuser',
- 'target': self.username,
- 'token': self.site.tokens['email'],
- 'subject': subject,
- 'text': text,
- }
- if ccme:
- params['ccme'] = 1
- mailrequest = self.site.simple_request(**params)
- maildata = mailrequest.submit()
-
- if 'emailuser' in maildata \
- and maildata['emailuser']['result'] == 'Success':
- return True
- return False
-
- def block(self, *args, **kwargs):
- """
- Block user.
-
- Refer to the :py:obj:`APISite.blockuser` method for parameters.
-
- :return: None
- """
- try:
- self.site.blockuser(self, *args, **kwargs)
- except APIError as err:
- if err.code == 'invalidrange':
- raise ValueError('{} is not a valid IP range.'
- .format(self.username))
-
- raise err
-
- def unblock(self, reason: Optional[str] = None) -> None:
- """
- Remove the block for the user.
-
- :param reason: Reason for the unblock.
- """
- self.site.unblockuser(self, reason)
-
- def logevents(self, **kwargs):
- """Yield user activities.
-
- :keyword logtype: only iterate entries of this type
- (see mediawiki api documentation for available types)
- :type logtype: str
- :keyword page: only iterate entries affecting this page
- :type page: Page or str
- :keyword namespace: namespace to retrieve logevents from
- :type namespace: int or Namespace
- :keyword start: only iterate entries from and after this Timestamp
- :type start: Timestamp or ISO date string
- :keyword end: only iterate entries up to and through this Timestamp
- :type end: Timestamp or ISO date string
- :keyword reverse: if True, iterate oldest entries first
- (default: newest)
- :type reverse: bool
- :keyword tag: only iterate entries tagged with this tag
- :type tag: str
- :keyword total: maximum number of events to iterate
- :type total: int
- :rtype: iterable
- """
- return self.site.logevents(user=self.username, **kwargs)
-
- @property
- def last_event(self):
- """Return last user activity.
-
- :return: last user log entry
- :rtype: LogEntry or None
- """
- return next(iter(self.logevents(total=1)), None)
-
- def contributions(self, total: int = 500, **kwargs) -> tuple:
- """
- Yield tuples describing this user's edits.
-
- Each tuple is composed of a pywikibot.Page object,
- the revision id (int), the edit timestamp (as a pywikibot.Timestamp
- object), and the comment (str).
- Pages returned are not guaranteed to be unique.
-
- :param total: limit result to this number of pages
- :keyword start: Iterate contributions starting at this Timestamp
- :keyword end: Iterate contributions ending at this Timestamp
- :keyword reverse: Iterate oldest contributions first (default: newest)
- :keyword namespaces: only iterate pages in these namespaces
- :type namespaces: iterable of str or Namespace key,
- or a single instance of those types. May be a '|' separated
- list of namespace identifiers.
- :keyword showMinor: if True, iterate only minor edits; if False and
- not None, iterate only non-minor edits (default: iterate both)
- :keyword top_only: if True, iterate only edits which are the latest
- revision (default: False)
- :return: tuple of pywikibot.Page, revid, pywikibot.Timestamp, comment
- """
- for contrib in self.site.usercontribs(
- user=self.username, total=total, **kwargs):
- ts = pywikibot.Timestamp.fromISOformat(contrib['timestamp'])
- yield (Page(self.site, contrib['title'], contrib['ns']),
- contrib['revid'],
- ts,
- contrib.get('comment'))
-
- @property
- def first_edit(self):
- """Return first user contribution.
-
- :return: first user contribution entry
- :return: tuple of pywikibot.Page, revid, pywikibot.Timestamp, comment
- :rtype: tuple or None
- """
- return next(self.contributions(reverse=True, total=1), None)
-
- @property
- def last_edit(self):
- """Return last user contribution.
-
- :return: last user contribution entry
- :return: tuple of pywikibot.Page, revid, pywikibot.Timestamp, comment
- :rtype: tuple or None
- """
- return next(self.contributions(total=1), None)
-
- def deleted_contributions(
- self, *, total: int = 500, **kwargs
- ) -> Iterable[Tuple[Page, Revision]]:
- """Yield tuples describing this user's deleted edits.
-
- .. versionadded:: 5.5
-
- :param total: Limit results to this number of pages
- :keyword start: Iterate contributions starting at this Timestamp
- :keyword end: Iterate contributions ending at this Timestamp
- :keyword reverse: Iterate oldest contributions first (default: newest)
- :keyword namespaces: Only iterate pages in these namespaces
- """
- for data in self.site.alldeletedrevisions(user=self.username,
- total=total, **kwargs):
- page = Page(self.site, data['title'], data['ns'])
- for contrib in data['revisions']:
- yield page, Revision(**contrib)
-
- def uploadedImages(self, total=10):
- """
- Yield tuples describing files uploaded by this user.
-
- Each tuple is composed of a pywikibot.Page, the timestamp (str in
- ISO8601 format), comment (str) and a bool for pageid > 0.
- Pages returned are not guaranteed to be unique.
-
- :param total: limit result to this number of pages
- :type total: int
- """
- if not self.isRegistered():
- return
- for item in self.logevents(logtype='upload', total=total):
- yield (item.page(),
- str(item.timestamp()),
- item.comment(),
- item.pageid() > 0)
-
- @property
- def is_thankable(self) -> bool:
- """
- Determine if the user has thanks notifications enabled.
-
- NOTE: This doesn't accurately determine if thanks is enabled for the user.
- Privacy of thanks preferences is under discussion, please see
- https://phabricator.wikimedia.org/T57401#2216861, and
- https://phabricator.wikimedia.org/T120753#1863894
- """
- return self.isRegistered() and 'bot' not in self.groups()
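The User class removed above is moved (with trimmed imports) into _user.py in the diff below; its behavior is unchanged. A short usage sketch of the API shown above, with placeholder site and username:

    import pywikibot

    site = pywikibot.Site('en', 'wikipedia')
    user = pywikibot.User(site, 'Example')  # placeholder username
    if user.isRegistered():
        # contributions() yields (Page, revid, Timestamp, comment) tuples
        for page, revid, timestamp, comment in user.contributions(total=3):
            print(page.title(), revid, timestamp, comment)
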
diff --git a/pywikibot/page/_user.py b/pywikibot/page/_user.py
index 67977b8..dc96be0 100644
--- a/pywikibot/page/_user.py
+++ b/pywikibot/page/_user.py
@@ -1,2628 +1,26 @@
-"""Objects representing various types of MediaWiki pages.
-
-This module includes objects:
-
-- BasePage: Base object for a MediaWiki page
-- Page: A MediaWiki page
-- Category: A page in the Category: namespace
-- User: A class that represents a Wiki user
-
-Various Wikibase pages are defined in ``page._wikibase.py``,
-various pages for Proofread Extensions are defined in
-``pywikibot.proofreadpage``.
-
-.. note::
- `Link` objects represent a wiki-page's title, while
- :class:`pywikibot.Page` objects (defined here) represent the page
- itself, including its contents.
-"""
+"""Object representing a Wiki user."""
#
-# (C) Pywikibot team, 2008-2022
+# (C) Pywikibot team, 2009-2022
#
# Distributed under the terms of the MIT license.
#
-import re
-from collections import Counter, defaultdict
-from contextlib import suppress
-from textwrap import shorten, wrap
-from typing import Optional, Union
-from urllib.parse import quote_from_bytes
-from warnings import warn
+from typing import Optional
import pywikibot
-from pywikibot import config, date, i18n, textlib
-from pywikibot.backports import Generator, Iterable, List, Tuple
-from pywikibot.cosmetic_changes import CANCEL, CosmeticChangesToolkit
+from pywikibot.backports import Iterable, Tuple
from pywikibot.exceptions import (
APIError,
AutoblockUserError,
- Error,
- InterwikiRedirectPageError,
- InvalidPageError,
- IsNotRedirectPageError,
- IsRedirectPageError,
- NoMoveTargetError,
- NoPageError,
NotEmailableError,
- NoUsernameError,
- OtherPageSaveError,
- PageSaveRelatedError,
- SectionError,
- UnknownExtensionError,
UserRightsError,
)
-from pywikibot.page._decorators import allow_asynchronous
-from pywikibot.page._links import BaseLink, Link
+from pywikibot.page._links import Link
+from pywikibot.page._pages import Page
from pywikibot.page._revision import Revision
-from pywikibot.site import Namespace, NamespaceArgType
-from pywikibot.tools import (
- ComparableMixin,
- deprecated,
- first_upper,
- is_ip_address,
- issue_deprecation_warning,
- remove_last_args,
-)
+from pywikibot.tools import deprecated, is_ip_address
-PROTOCOL_REGEX = r'\Ahttps?://'
-
-__all__ = (
- 'BasePage',
- 'Category',
- 'Page',
- 'User',
-)
-
-
-class BasePage(ComparableMixin):
-
- """
- BasePage: Base object for a MediaWiki page.
-
- This object only implements internally methods that do not require
- reading from or writing to the wiki. All other methods are delegated
- to the Site object.
-
- Will be subclassed by Page, WikibasePage, and FlowPage.
- """
-
- _cache_attrs = (
- '_text', '_pageid', '_catinfo', '_templates', '_protection',
- '_contentmodel', '_langlinks', '_isredir', '_coords',
- '_preloadedtext', '_timestamp', '_applicable_protections',
- '_flowinfo', '_quality', '_pageprops', '_revid', '_quality_text',
- '_pageimage', '_item', '_lintinfo',
- )
-
- def __init__(self, source, title: str = '', ns=0) -> None:
- """
- Instantiate a Page object.
-
- Three calling formats are supported:
-
- - If the first argument is a Page, create a copy of that object.
- This can be used to convert an existing Page into a subclass
- object, such as Category or FilePage. (If the title is also
- given as the second argument, creates a copy with that title;
- this is used when pages are moved.)
- - If the first argument is a Site, create a Page on that Site
- using the second argument as the title (may include a section),
- and the third as the namespace number. The namespace number is
- mandatory, even if the title includes the namespace prefix. This
- is the preferred syntax when using an already-normalized title
- obtained from api.php or a database dump. WARNING: may produce
- invalid objects if page title isn't in normal form!
- - If the first argument is a BaseLink, create a Page from that link.
- This is the preferred syntax when using a title scraped from
- wikitext, URLs, or another non-normalized source.
-
- :param source: the source of the page
- :type source: pywikibot.page.BaseLink (or subclass),
- pywikibot.page.Page (or subclass), or pywikibot.page.Site
- :param title: normalized title of the page; required if source is a
- Site, ignored otherwise
- :type title: str
- :param ns: namespace number; required if source is a Site, ignored
- otherwise
- :type ns: int
- """
- if title is None:
- raise ValueError('Title cannot be None.')
-
- if isinstance(source, pywikibot.site.BaseSite):
- self._link = Link(title, source=source, default_namespace=ns)
- self._revisions = {}
- elif isinstance(source, Page):
- # copy all of source's attributes to this object
- # without overwriting non-None values
- self.__dict__.update((k, v) for k, v in source.__dict__.items()
- if k not in self.__dict__
- or self.__dict__[k] is None)
- if title:
- # overwrite title
- self._link = Link(title, source=source.site,
- default_namespace=ns)
- elif isinstance(source, BaseLink):
- self._link = source
- self._revisions = {}
- else:
- raise Error(
- "Invalid argument type '{}' in Page initializer: {}"
- .format(type(source), source))
-
- @property
- def site(self):
- """Return the Site object for the wiki on which this Page resides.
-
- :rtype: pywikibot.Site
- """
- return self._link.site
-
- def version(self):
- """
- Return MediaWiki version number of the page site.
-
- This is needed to use @need_version() decorator for methods of
- Page objects.
- """
- return self.site.version()
-
- @property
- def image_repository(self):
- """Return the Site object for the image repository."""
- return self.site.image_repository()
-
- @property
- def data_repository(self):
- """Return the Site object for the data repository."""
- return self.site.data_repository()
-
- def namespace(self):
- """
- Return the namespace of the page.
-
- :return: namespace of the page
- :rtype: pywikibot.Namespace
- """
- return self._link.namespace
-
- @property
- def content_model(self):
- """
- Return the content model for this page.
-
- If it cannot be reliably determined via the API,
- None is returned.
- """
- if not hasattr(self, '_contentmodel'):
- self.site.loadpageinfo(self)
- return self._contentmodel
-
- @property
- def depth(self):
- """Return the depth/subpage level of the page."""
- if not hasattr(self, '_depth'):
- # Check if the namespace allows subpages
- if self.namespace().subpages:
- self._depth = self.title().count('/')
- else:
- # Does not allow subpages, which means depth is always 0
- self._depth = 0
-
- return self._depth
-
- @property
- def pageid(self) -> int:
- """
- Return pageid of the page.
-
- :return: pageid or 0 if page does not exist
- """
- if not hasattr(self, '_pageid'):
- self.site.loadpageinfo(self)
- return self._pageid
-
- def title(
- self,
- *,
- underscore: bool = False,
- with_ns: bool = True,
- with_section: bool = True,
- as_url: bool = False,
- as_link: bool = False,
- allow_interwiki: bool = True,
- force_interwiki: bool = False,
- textlink: bool = False,
- as_filename: bool = False,
- insite=None,
- without_brackets: bool = False
- ) -> str:
- """
- Return the title of this Page, as a string.
-
- :param underscore: (not used with as_link) if true, replace all ' '
- characters with '_'
- :param with_ns: if false, omit the namespace prefix. If this
- option is false and used together with as_link return a labeled
- link like [[link|label]]
- :param with_section: if false, omit the section
- :param as_url: (not used with as_link) if true, quote title as if in an
- URL
- :param as_link: if true, return the title in the form of a wikilink
- :param allow_interwiki: (only used if as_link is true) if true, format
- the link as an interwiki link if necessary
- :param force_interwiki: (only used if as_link is true) if true, always
- format the link as an interwiki link
- :param textlink: (only used if as_link is true) if true, place a ':'
- before Category: and Image: links
- :param as_filename: (not used with as_link) if true, replace any
- characters that are unsafe in filenames
- :param insite: (only used if as_link is true) a site object where the
- title is to be shown. Default is the current family/lang given by
- -family and -lang or -site option i.e. config.family and
- config.mylang
- :param without_brackets: (cannot be used with as_link) if true, remove
- the last pair of brackets(usually removes disambiguation brackets).
- """
- title = self._link.canonical_title()
- label = self._link.title
- if with_section and self.section():
- section = '#' + self.section()
- else:
- section = ''
- if as_link:
- if insite:
- target_code = insite.code
- target_family = insite.family.name
- else:
- target_code = config.mylang
- target_family = config.family
- if force_interwiki \
- or (allow_interwiki
- and (self.site.family.name != target_family
- or self.site.code != target_code)):
- if self.site.family.name not in (
- target_family, self.site.code):
- title = '{site.family.name}:{site.code}:{title}'.format(
- site=self.site, title=title)
- else:
- # use this form for sites like commons, where the
- # code is the same as the family name
- title = '{}:{}'.format(self.site.code, title)
- elif textlink and (self.is_filepage() or self.is_categorypage()):
- title = ':{}'.format(title)
- elif self.namespace() == 0 and not section:
- with_ns = True
- if with_ns:
- return '[[{}{}]]'.format(title, section)
- return '[[{}{}|{}]]'.format(title, section, label)
- if not with_ns and self.namespace() != 0:
- title = label + section
- else:
- title += section
- if without_brackets:
- brackets_re = r'\s+\([^()]+?\)$'
- title = re.sub(brackets_re, '', title)
- if underscore or as_url:
- title = title.replace(' ', '_')
- if as_url:
- encoded_title = title.encode(self.site.encoding())
- title = quote_from_bytes(encoded_title, safe='')
- if as_filename:
- # Replace characters that are not possible in file names on some
- # systems, but still are valid in MediaWiki titles:
- # Unix: /
- # MediaWiki: /:\
- # Windows: /:\"?*
- # Spaces are possible on most systems, but are bad for URLs.
- for forbidden in ':*?/\\" ':
- title = title.replace(forbidden, '_')
- return title
-
- def section(self) -> Optional[str]:
- """
- Return the name of the section this Page refers to.
-
- The section is the part of the title following a '#' character, if
- any. If no section is present, return None.
- """
- try:
- section = self._link.section
- except AttributeError:
- section = None
- return section
-
- def __str__(self) -> str:
- """Return a string representation."""
- return self.title(as_link=True, force_interwiki=True)
-
- def __repr__(self) -> str:
- """Return a more complete string representation."""
- return '{}({!r})'.format(self.__class__.__name__, self.title())
-
- def _cmpkey(self):
- """
- Key for comparison of Page objects.
-
- Page objects are "equal" if and only if they are on the same site
- and have the same normalized title, including section if any.
-
- Page objects are sortable by site, namespace then title.
- """
- return (self.site, self.namespace(), self.title())
-
- def __hash__(self):
- """
- A stable identifier to be used as a key in hash-tables.
-
- This relies on the fact that the string
- representation of an instance cannot change after the construction.
- """
- return hash(self._cmpkey())
-
- def full_url(self):
- """Return the full URL."""
- return self.site.base_url(
- self.site.articlepath.format(self.title(as_url=True)))
-
- def autoFormat(self):
- """
- Return :py:obj:`date.getAutoFormat` dictName and value, if any.
-
- Value can be a year, date, etc., and dictName is 'YearBC',
- 'Year_December', or another dictionary name. Please note that two
- entries may have exactly the same autoFormat, but be in two
- different namespaces, as some sites have categories with the
- same names. Regular titles return (None, None).
- """
- if not hasattr(self, '_autoFormat'):
- self._autoFormat = date.getAutoFormat(
- self.site.lang,
- self.title(with_ns=False)
- )
- return self._autoFormat
-
- def isAutoTitle(self):
- """Return True if title of this Page is in the autoFormat dict."""
- return self.autoFormat()[0] is not None
-
- def get(self, force: bool = False, get_redirect: bool = False) -> str:
- """Return the wiki-text of the page.
-
- This will retrieve the page from the server if it has not been
- retrieved yet, or if force is True. This can raise the following
- exceptions that should be caught by the calling code:
-
- :exception pywikibot.exceptions.NoPageError: The page does not exist
- :exception pywikibot.exceptions.IsRedirectPageError: The page is a
- redirect. The argument of the exception is the title of the page
- it redirects to.
- :exception pywikibot.exceptions.SectionError: The section does not
- exist on a page with a # link
-
- :param force: reload all page attributes, including errors.
- :param get_redirect: return the redirect text, do not follow the
- redirect, do not raise an exception.
- """
- if force:
- del self.latest_revision_id
- if hasattr(self, '_bot_may_edit'):
- del self._bot_may_edit
- try:
- self._getInternals()
- except IsRedirectPageError:
- if not get_redirect:
- raise
-
- return self.latest_revision.text
-
- def _latest_cached_revision(self):
- """Get the latest revision if cached and has text, otherwise None."""
- if (hasattr(self, '_revid') and self._revid in self._revisions
- and self._revisions[self._revid].text is not None):
- return self._revisions[self._revid]
- return None
-
- def _getInternals(self):
- """
- Helper function for get().
-
- Stores the latest revision in self if it is not already cached.
- * Raises exceptions from previous runs.
- * Stores new exceptions in _getexception and raises them.
- """
- # Raise exceptions from previous runs
- if hasattr(self, '_getexception'):
- raise self._getexception
-
- # If not already stored, fetch revision
- if self._latest_cached_revision() is None:
- try:
- self.site.loadrevisions(self, content=True)
- except (NoPageError, SectionError) as e:
- self._getexception = e
- raise
-
- # self._isredir is set by loadrevisions
- if self._isredir:
- self._getexception = IsRedirectPageError(self)
- raise self._getexception
-
- @remove_last_args(['get_redirect'])
- def getOldVersion(self, oldid, force: bool = False) -> str:
- """Return text of an old revision of this page.
-
- :param oldid: The revid of the revision desired.
- """
- if force or oldid not in self._revisions \
- or self._revisions[oldid].text is None:
- self.site.loadrevisions(self, content=True, revids=oldid)
- return self._revisions[oldid].text
-
- def permalink(self, oldid=None, percent_encoded: bool = True,
- with_protocol: bool = False) -> str:
- """Return the permalink URL of an old revision of this page.
-
- :param oldid: The revid of the revision desired.
- :param percent_encoded: if false, the title in the link will not
- be percent-encoded.
- :param with_protocol: if true, http or https prefixes will be
- included before the double slash.
- """
- if percent_encoded:
- title = self.title(as_url=True)
- else:
- title = self.title(as_url=False).replace(' ', '_')
- return '{}//{}{}/index.php?title={}&oldid={}'.format(
- self.site.protocol() + ':' if with_protocol else '',
- self.site.hostname(),
- self.site.scriptpath(),
- title,
- oldid if oldid is not None else self.latest_revision_id)
-
- @property
- def latest_revision_id(self):
- """Return the current revision id for this page."""
- if not hasattr(self, '_revid'):
- self.revisions()
- return self._revid
-
- @latest_revision_id.deleter
- def latest_revision_id(self) -> None:
- """
- Remove the latest revision id set for this Page.
-
- All internal cached values specifically for the latest revision
- of this page are cleared.
-
- The following cached values are not cleared:
- - text property
- - page properties, and page coordinates
- - lastNonBotUser
- - isDisambig and isCategoryRedirect status
- - langlinks, templates and deleted revisions
- """
- # When forcing, we retry the page no matter what:
- # * Old exceptions do not apply any more
- # * Deleting _revid to force reload
- # * Deleting _redirtarget, that info is now obsolete.
- for attr in ['_redirtarget', '_getexception', '_revid']:
- if hasattr(self, attr):
- delattr(self, attr)
-
- @latest_revision_id.setter
- def latest_revision_id(self, value) -> None:
- """Set the latest revision for this Page."""
- del self.latest_revision_id
- self._revid = value
-
- @property
- def latest_revision(self):
- """Return the current revision for this page."""
- rev = self._latest_cached_revision()
- if rev is not None:
- return rev
-
- with suppress(StopIteration):
- return next(self.revisions(content=True, total=1))
- raise InvalidPageError(self)
-
- @property
- def text(self) -> str:
- """
- Return the current (edited) wikitext, loading it if necessary.
-
- :return: text of the page
- """
- if getattr(self, '_text', None) is not None:
- return self._text
-
- try:
- return self.get(get_redirect=True)
- except NoPageError:
- # TODO: what other exceptions might be returned?
- return ''
-
- @text.setter
- def text(self, value: Optional[str]):
- """Update the current (edited) wikitext.
-
- :param value: New value or None
- """
- try:
- self.botMayEdit() # T262136, T267770
- except Exception as e:
- # dry tests aren't able to make an API call
- # but are rejected by an Exception; ignore it then.
- if not str(e).startswith('DryRequest rejecting request:'):
- raise
-
- del self.text
- self._text = None if value is None else str(value)
-
- @text.deleter
- def text(self) -> None:
- """Delete the current (edited) wikitext."""
- if hasattr(self, '_text'):
- del self._text
- if hasattr(self, '_expanded_text'):
- del self._expanded_text
- if hasattr(self, '_raw_extracted_templates'):
- del self._raw_extracted_templates
-
- def preloadText(self) -> str:
- """
- The text returned by EditFormPreloadText.
-
- See API module "info".
-
- Application: on Wikisource wikis, text can be preloaded even if
- a page does not exist, if an Index page is present.
- """
- self.site.loadpageinfo(self, preload=True)
- return self._preloadedtext
-
- def get_parsed_page(self, force: bool = False) -> str:
- """Retrieve parsed text (via action=parse) and cache it.
-
- .. versionchanged:: 7.1
- `force` parameter was added;
- `_get_parsed_page` becomes a public method
-
- :param force: force updating from the live site
-
- .. seealso::
- :meth:`APISite.get_parsed_page()
- <pywikibot.site._apisite.APISite.get_parsed_page>`
- """
- if not hasattr(self, '_parsed_text') or force:
- self._parsed_text = self.site.get_parsed_page(self)
- return self._parsed_text
-
- def extract(self, variant: str = 'plain', *,
- lines: Optional[int] = None,
- chars: Optional[int] = None,
- sentences: Optional[int] = None,
- intro: bool = True) -> str:
- """Retrieve an extract of this page.
-
- .. versionadded:: 7.1
-
- :param variant: The variant of extract, either 'plain' for plain
- text, 'html' for limited HTML (both excludes templates and
- any text formatting) or 'wiki' for bare wikitext which also
- includes any templates for example.
- :param lines: if not None, wrap the extract into lines with
- width of 79 chars and return a string with that given number
- of lines.
- :param chars: How many characters to return. Actual text
- returned might be slightly longer.
- :param sentences: How many sentences to return
- :param intro: Return only content before the first section
- :raises NoPageError: given page does not exist
- :raises NotImplementedError: "wiki" variant does not support
- `sentence` parameter.
- :raises ValueError: `variant` parameter must be "plain", "html" or
- "wiki"
-
- .. seealso:: :meth:`APISite.extract()
- <pywikibot.site._extensions.TextExtractsMixin.extract>`.
- """
- if variant in ('plain', 'html'):
- extract = self.site.extract(self, chars=chars, sentences=sentences,
- intro=intro,
- plaintext=variant == 'plain')
- elif variant == 'wiki':
- if not self.exists():
- raise NoPageError(self)
- if sentences:
- raise NotImplementedError(
- "'wiki' variant of extract method does not support "
- "'sencence' parameter")
-
- extract = self.text[:]
- if intro:
- pos = extract.find('\n=')
- if pos:
- extract = extract[:pos]
- if chars:
- extract = shorten(extract, chars, break_long_words=False,
- placeholder='…')
- else:
- raise ValueError(
- 'variant parameter must be "plain", "html" or "wiki", not "{}"'
- .format(variant))
-
- if not lines:
- return extract
-
- text_lines = []
- for i, text in enumerate(extract.splitlines(), start=1):
- text_lines += wrap(text, width=79) or ['']
- if i >= lines:
- break
-
- return '\n'.join(text_lines[:min(lines, len(text_lines))])
-
- def properties(self, force: bool = False) -> dict:
- """
- Return the properties of the page.
-
- :param force: force updating from the live site
- """
- if not hasattr(self, '_pageprops') or force:
- self._pageprops = {} # page may not have pageprops (see T56868)
- self.site.loadpageprops(self)
- return self._pageprops
-
- def defaultsort(self, force: bool = False) -> Optional[str]:
- """
- Extract value of the {{DEFAULTSORT:}} magic word from the page.
-
- :param force: force updating from the live site
- """
- return self.properties(force=force).get('defaultsort')
-
- def expand_text(
- self,
- force: bool = False,
- includecomments: bool = False
- ) -> str:
- """Return the page text with all templates and parser words expanded.
-
- :param force: force updating from the live site
- :param includecomments: Also strip comments if includecomments
- parameter is not True.
- """
- if not hasattr(self, '_expanded_text') or (
- self._expanded_text is None) or force:
- if not self.text:
- self._expanded_text = ''
- return ''
-
- self._expanded_text = self.site.expand_text(
- self.text,
- title=self.title(with_section=False),
- includecomments=includecomments)
- return self._expanded_text
-
- def userName(self) -> str:
- """Return name or IP address of last user to edit page."""
- return self.latest_revision.user
-
- def isIpEdit(self) -> bool:
- """Return True if last editor was unregistered."""
- return self.latest_revision.anon
-
- def lastNonBotUser(self) -> str:
- """
- Return name or IP address of last human/non-bot user to edit page.
-
- Determine the most recent human editor out of the last revisions.
- If it was not able to retrieve a human user, returns None.
-
- If the edit was done by a bot which is no longer flagged as 'bot',
- i.e. which is not returned by Site.botusers(), it will be returned
- as a non-bot edit.
- """
- if hasattr(self, '_lastNonBotUser'):
- return self._lastNonBotUser
-
- self._lastNonBotUser = None
- for entry in self.revisions():
- if entry.user and (not self.site.isBot(entry.user)):
- self._lastNonBotUser = entry.user
- break
-
- return self._lastNonBotUser
-
- def editTime(self):
- """Return timestamp of last revision to page.
-
- :rtype: pywikibot.Timestamp
- """
- return self.latest_revision.timestamp
-
- def exists(self) -> bool:
- """Return True if page exists on the wiki, even if it's a redirect.
-
- If the title includes a section, return False if this section isn't
- found.
- """
- with suppress(AttributeError):
- return self.pageid > 0
- raise InvalidPageError(self)
-
- @property
- def oldest_revision(self):
- """
- Return the first revision of this page.
-
- :rtype: :py:obj:`Revision`
- """
- return next(self.revisions(reverse=True, total=1))
-
- def isRedirectPage(self):
- """Return True if this is a redirect, False if not or not existing."""
- return self.site.page_isredirect(self)
-
- def isStaticRedirect(self, force: bool = False) -> bool:
- """Determine whether the page is a static redirect.
-
- A static redirect must be a valid redirect, and contain the magic
- word __STATICREDIRECT__.
-
- .. versionchanged:: 7.0
- __STATICREDIRECT__ can be transcluded
-
- :param force: Bypass local caching
- """
- return self.isRedirectPage() \
- and 'staticredirect' in self.properties(force=force)
-
- def isCategoryRedirect(self) -> bool:
- """Return True if this is a category redirect page, False otherwise."""
- if not self.is_categorypage():
- return False
-
- if not hasattr(self, '_catredirect'):
- self._catredirect = False
- catredirs = self.site.category_redirects()
- for template, args in self.templatesWithParams():
- if template.title(with_ns=False) not in catredirs:
- continue
-
- if args:
- # Get target (first template argument)
- target_title = args[0].strip()
- p = pywikibot.Page(
- self.site, target_title, Namespace.CATEGORY)
- try:
- p.title()
- except pywikibot.exceptions.InvalidTitleError:
- target_title = self.site.expand_text(
- text=target_title, title=self.title())
- p = pywikibot.Page(self.site, target_title,
- Namespace.CATEGORY)
- if p.namespace() == Namespace.CATEGORY:
- self._catredirect = p.title()
- else:
- pywikibot.warning(
- 'Category redirect target {} on {} is not a '
- 'category'.format(p.title(as_link=True),
- self.title(as_link=True)))
- else:
- pywikibot.warning(
- 'No target found for category redirect on '
- + self.title(as_link=True))
- break
-
- return bool(self._catredirect)
-
- def getCategoryRedirectTarget(self):
- """
- If this is a category redirect, return the target category title.
-
- :rtype: pywikibot.page.Category
- """
- if self.isCategoryRedirect():
- return Category(Link(self._catredirect, self.site))
- raise IsNotRedirectPageError(self)
-
- def isTalkPage(self):
- """Return True if this page is in any talk namespace."""
- ns = self.namespace()
- return ns >= 0 and ns % 2 == 1
-
- def toggleTalkPage(self):
- """
- Return other member of the article-talk page pair for this Page.
-
- If self is a talk page, returns the associated content page;
- otherwise, returns the associated talk page. The returned page need
- not actually exist on the wiki.
-
- :return: Page or None if self is a special page.
- :rtype: typing.Optional[pywikibot.Page]
- """
- ns = self.namespace()
- if ns < 0: # Special page
- return None
-
- title = self.title(with_ns=False)
- new_ns = ns + (1, -1)[self.isTalkPage()]
- return Page(self.site,
- '{}:{}'.format(self.site.namespace(new_ns), title))
-
- def is_categorypage(self):
- """Return True if the page is a Category, False otherwise."""
- return self.namespace() == 14
-
- def is_filepage(self):
- """Return True if this is a file description page, False otherwise."""
- return self.namespace() == 6
-
- def isDisambig(self) -> bool:
- """
- Return True if this is a disambiguation page, False otherwise.
-
- By default, it uses the Disambiguator extension's result. The
- identification relies on the presence of the __DISAMBIG__ magic word
- which may also be transcluded.
-
- If the Disambiguator extension isn't activated for the given site,
- the identification relies on the presence of specific templates.
- First load a list of template names from the Family file;
- if the value in the Family file is None or no entry was made, look for
- the list on [[MediaWiki:Disambiguationspage]]. If this page does not
- exist, take the MediaWiki message. 'Template:Disambig' is always
- assumed to be default, and will be appended regardless of its
- existence.
- """
- if self.site.has_extension('Disambiguator'):
- # If the Disambiguator extension is loaded, use it
- return 'disambiguation' in self.properties()
-
- if not hasattr(self.site, '_disambigtemplates'):
- try:
- default = set(self.site.family.disambig('_default'))
- except KeyError:
- default = {'Disambig'}
- try:
- distl = self.site.family.disambig(self.site.code,
- fallback=False)
- except KeyError:
- distl = None
- if distl is None:
- disambigpages = Page(self.site,
- 'MediaWiki:Disambiguationspage')
- if disambigpages.exists():
- disambigs = {link.title(with_ns=False)
- for link in disambigpages.linkedPages()
- if link.namespace() == 10}
- elif self.site.has_mediawiki_message('disambiguationspage'):
- message = self.site.mediawiki_message(
- 'disambiguationspage').split(':', 1)[1]
- # add the default template(s) for default mw message
- # only
- disambigs = {first_upper(message)} | default
- else:
- disambigs = default
- self.site._disambigtemplates = disambigs
- else:
- # Normalize template capitalization
- self.site._disambigtemplates = {first_upper(t) for t in distl}
- templates = {tl.title(with_ns=False) for tl in self.templates()}
- disambigs = set()
- # always use cached disambig templates
- disambigs.update(self.site._disambigtemplates)
- # see if any template on this page is in the set of disambigs
- disambig_in_page = disambigs.intersection(templates)
- return self.namespace() != 10 and bool(disambig_in_page)
-
- def getReferences(self,
- follow_redirects: bool = True,
- with_template_inclusion: bool = True,
- only_template_inclusion: bool = False,
- filter_redirects: bool = False,
- namespaces=None,
- total: Optional[int] = None,
- content: bool = False):
- """
- Return an iterator all pages that refer to or embed the page.
-
- If you need a full list of referring pages, use
- ``pages = list(s.getReferences())``
-
- :param follow_redirects: if True, also iterate pages that link to a
- redirect pointing to the page.
- :param with_template_inclusion: if True, also iterate pages where self
- is used as a template.
- :param only_template_inclusion: if True, only iterate pages where self
- is used as a template.
- :param filter_redirects: if True, only iterate redirects to self.
- :param namespaces: only iterate pages in these namespaces
- :param total: iterate no more than this number of pages in total
- :param content: if True, retrieve the content of the current version
- of each referring page (default False)
- :rtype: typing.Iterable[pywikibot.Page]
- """
- # N.B.: this method intentionally overlaps with backlinks() and
- # embeddedin(). Depending on the interface, it may be more efficient
- # to implement those methods in the site interface and then combine
- # the results for this method, or to implement this method and then
- # split up the results for the others.
- return self.site.pagereferences(
- self,
- follow_redirects=follow_redirects,
- filter_redirects=filter_redirects,
- with_template_inclusion=with_template_inclusion,
- only_template_inclusion=only_template_inclusion,
- namespaces=namespaces,
- total=total,
- content=content
- )
-
- def backlinks(self,
- follow_redirects: bool = True,
- filter_redirects: Optional[bool] = None,
- namespaces=None,
- total: Optional[int] = None,
- content: bool = False):
- """
- Return an iterator for pages that link to this page.
-
- :param follow_redirects: if True, also iterate pages that link to a
- redirect pointing to the page.
- :param filter_redirects: if True, only iterate redirects; if False,
- omit redirects; if None, do not filter
- :param namespaces: only iterate pages in these namespaces
- :param total: iterate no more than this number of pages in total
- :param content: if True, retrieve the content of the current version
- of each referring page (default False)
- """
- return self.site.pagebacklinks(
- self,
- follow_redirects=follow_redirects,
- filter_redirects=filter_redirects,
- namespaces=namespaces,
- total=total,
- content=content
- )
-
- def embeddedin(self,
- filter_redirects: Optional[bool] = None,
- namespaces=None,
- total: Optional[int] = None,
- content: bool = False):
- """
- Return an iterator for pages that embed this page as a template.
-
- :param filter_redirects: if True, only iterate redirects; if False,
- omit redirects; if None, do not filter
- :param namespaces: only iterate pages in these namespaces
- :param total: iterate no more than this number of pages in total
- :param content: if True, retrieve the content of the current version
- of each embedding page (default False)
- """
- return self.site.page_embeddedin(
- self,
- filter_redirects=filter_redirects,
- namespaces=namespaces,
- total=total,
- content=content
- )
-
- def redirects(
- self,
- *,
- filter_fragments: Optional[bool] = None,
- namespaces: NamespaceArgType = None,
- total: Optional[int] = None,
- content: bool = False
- ) -> 'Iterable[pywikibot.Page]':
- """
- Return an iterable of redirects to this page.
-
- :param filter_fragments: If True, only return redirects with fragments.
- If False, only return redirects without fragments. If None, return
- both (no filtering).
- :param namespaces: only return redirects from these namespaces
- :param total: maximum number of redirects to retrieve in total
- :param content: load the current content of each redirect
-
- .. versionadded:: 7.0
- """
- return self.site.page_redirects(
- self,
- filter_fragments=filter_fragments,
- namespaces=namespaces,
- total=total,
- content=content,
- )
-
- def protection(self) -> dict:
- """Return a dictionary reflecting page protections."""
- return self.site.page_restrictions(self)
-
- def applicable_protections(self) -> set:
- """
- Return the protection types allowed for that page.
-
- If the page doesn't exist it only returns "create". Otherwise it
- returns all protection types provided by the site, except "create".
- It also removes "upload" if that page is not in the File namespace.
-
- It is possible, that it returns an empty set, but only if original
- protection types were removed.
-
- :return: set of str
- """
- # New API since commit 32083235eb332c419df2063cf966b3400be7ee8a
- if self.site.mw_version >= '1.25wmf14':
- self.site.loadpageinfo(self)
- return self._applicable_protections
-
- p_types = set(self.site.protection_types())
- if not self.exists():
- return {'create'} if 'create' in p_types else set()
- p_types.remove('create') # no existing page allows that
- if not self.is_filepage(): # only file pages allow upload
- p_types.remove('upload')
- return p_types
-
- def has_permission(self, action: str = 'edit') -> bool:
- """Determine whether the page can be modified.
-
- Return True if the bot has the permission of needed restriction level
- for the given action type.
-
- :param action: a valid restriction type like 'edit', 'move'
- :raises ValueError: invalid action parameter
- """
- return self.site.page_can_be_edited(self, action)
-
- def botMayEdit(self) -> bool:
- """
- Determine whether the active bot is allowed to edit the page.
-
- This will be True if the page doesn't contain {{bots}} or {{nobots}}
- or any other template from edit_restricted_templates list
- in x_family.py file, or it contains them and the active bot is allowed
- to edit this page. (This method is only useful on those sites that
- recognize the bot-exclusion protocol; on other sites, it will always
- return True.)
-
- The framework enforces this restriction by default. It is possible
- to override this by setting ignore_bot_templates=True in
- user-config.py, or using page.put(force=True).
- """
- if not hasattr(self, '_bot_may_edit'):
- self._bot_may_edit = self._check_bot_may_edit()
- return self._bot_may_edit
-
- def _check_bot_may_edit(self, module: Optional[str] = None) -> bool:
- """A botMayEdit helper method.
-
- @param module: The module name to be restricted. Defaults to
- pywikibot.calledModuleName().
- """
- if not hasattr(self, 'templatesWithParams'):
- return True
-
- if config.ignore_bot_templates: # Check the "master ignore switch"
- return True
-
- username = self.site.username()
- try:
- templates = self.templatesWithParams()
- except (NoPageError, IsRedirectPageError, SectionError):
- return True
-
- # go through all templates and look for any restriction
- restrictions = set(self.site.get_edit_restricted_templates())
-
- if module is None:
- module = pywikibot.calledModuleName()
-
- # also add archive templates for non-archive bots
- if module != 'archivebot':
- restrictions.update(self.site.get_archived_page_templates())
-
- # multiple bots/nobots templates are allowed
- for template, params in templates:
- title = template.title(with_ns=False)
-
- if title in restrictions:
- return False
-
- if title not in ('Bots', 'Nobots'):
- continue
-
- try:
- key, sep, value = params[0].partition('=')
- except IndexError:
- key, sep, value = '', '', ''
- names = set()
- else:
- if not sep:
- key, value = value, key
- key = key.strip()
- names = {name.strip() for name in value.split(',')}
-
- if len(params) > 1:
- pywikibot.warning(
- '{{%s|%s}} has more than 1 parameter; taking the first.'
- % (title.lower(), '|'.join(params)))
-
- if title == 'Nobots':
- if not params:
- return False
-
- if key:
- pywikibot.error(
- '%s parameter for {{nobots}} is not allowed. '
- 'Edit declined' % key)
- return False
-
- if 'all' in names or module in names or username in names:
- return False
-
- if title == 'Bots':
- if value and not key:
- pywikibot.warning(
- '{{bots|%s}} is not valid. Ignoring.' % value)
- continue
-
- if key and not value:
- pywikibot.warning(
- '{{bots|%s=}} is not valid. Ignoring.' % key)
- continue
-
- if key == 'allow':
- if not ('all' in names or username in names):
- return False
-
- elif key == 'deny':
- if 'all' in names or username in names:
- return False
-
- elif key == 'allowscript':
- if not ('all' in names or module in names):
- return False
-
- elif key == 'denyscript':
- if 'all' in names or module in names:
- return False
-
- elif key: # ignore unrecognized keys with a warning
- pywikibot.warning(
- '{{bots|%s}} is not valid. Ignoring.' % params[0])
-
- # no restricting template found
- return True
-
- def save(self,
- summary: Optional[str] = None,
- watch: Optional[str] = None,
- minor: bool = True,
- botflag: Optional[bool] = None,
- force: bool = False,
- asynchronous: bool = False,
- callback=None,
- apply_cosmetic_changes: Optional[bool] = None,
- quiet: bool = False,
- **kwargs):
- """
- Save the current contents of page's text to the wiki.
-
- .. versionchanged:: 7.0
- boolean watch parameter is deprecated
-
- :param summary: The edit summary for the modification (optional, but
- most wikis strongly encourage its use)
- :param watch: Specify how the watchlist is affected by this edit, set
- to one of "watch", "unwatch", "preferences", "nochange":
- * watch: add the page to the watchlist
- * unwatch: remove the page from the watchlist
- * preferences: use the preference settings (Default)
- * nochange: don't change the watchlist
- If None (default), follow bot account's default settings
- :param minor: if True, mark this edit as minor
- :param botflag: if True, mark this edit as made by a bot (default:
- True if user has bot status, False if not)
- :param force: if True, ignore botMayEdit() setting
- :param asynchronous: if True, launch a separate thread to save
- asynchronously
- :param callback: a callable object that will be called after the
- page put operation. This object must take two arguments: (1) a
- Page object, and (2) an exception instance, which will be None
- if the page was saved successfully. The callback is intended for
- use by bots that need to keep track of which saves were
- successful.
- :param apply_cosmetic_changes: Overwrites the cosmetic_changes
- configuration value to this value unless it's None.
- :param quiet: enable/disable successful save operation message;
- defaults to False.
- In asynchronous mode, if True, it is up to the calling bot to
- manage the output, e.g. via the callback.
- """
- if not summary:
- summary = config.default_edit_summary
-
- if isinstance(watch, bool):
- issue_deprecation_warning(
- 'boolean watch parameter',
- '"watch", "unwatch", "preferences" or "nochange" value',
- since='7.0.0')
- watch = ('unwatch', 'watch')[watch]
-
- if not force and not self.botMayEdit():
- raise OtherPageSaveError(
- self, 'Editing restricted by {{bots}}, {{nobots}} '
- "or site's equivalent of {{in use}} template")
- self._save(summary=summary, watch=watch, minor=minor, botflag=botflag,
- asynchronous=asynchronous, callback=callback,
- cc=apply_cosmetic_changes, quiet=quiet, **kwargs)
-
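A minimal usage sketch (page title, replacement and summary are
illustrative assumptions), showing a typical save with explicit
watchlist handling::

    import pywikibot

    site = pywikibot.Site('en', 'wikipedia')
    page = pywikibot.Page(site, 'Sandbox')
    page.text = page.text.replace('foo', 'bar')

    # 'nochange' keeps the watchlist untouched; minor marks the edit.
    page.save(summary='Bot: replace foo with bar',
              watch='nochange', minor=True)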
- @allow_asynchronous
- def _save(self, summary=None, watch=None, minor: bool = True, botflag=None,
- cc=None, quiet: bool = False, **kwargs):
- """Helper function for save()."""
- link = self.title(as_link=True)
- if cc or (cc is None and config.cosmetic_changes):
- summary = self._cosmetic_changes_hook(summary)
-
- done = self.site.editpage(self, summary=summary, minor=minor,
- watch=watch, bot=botflag, **kwargs)
- if not done:
- if not quiet:
- pywikibot.warning('Page {} not saved'.format(link))
- raise PageSaveRelatedError(self)
- if not quiet:
- pywikibot.output('Page {} saved'.format(link))
-
- def _cosmetic_changes_hook(self, summary: str) -> str:
- """The cosmetic changes hook.
-
- :param summary: The current edit summary.
- :return: Modified edit summary if cosmetic changes has been done,
- else the old edit summary.
- """
- if self.isTalkPage() or self.content_model != 'wikitext' or \
- pywikibot.calledModuleName() in config.cosmetic_changes_deny_script:
- return summary
-
- # check if cosmetic_changes is enabled for this page
- family = self.site.family.name
- if config.cosmetic_changes_mylang_only:
- cc = ((family == config.family and self.site.lang == config.mylang)
- or self.site.lang in config.cosmetic_changes_enable.get(
- family, []))
- else:
- cc = True
- cc = cc and self.site.lang not in config.cosmetic_changes_disable.get(
- family, [])
- cc = cc and self._check_bot_may_edit('cosmetic_changes')
- if not cc:
- return summary
-
- old = self.text
- pywikibot.log('Cosmetic changes for {}-{} enabled.'
- .format(family, self.site.lang))
- # cc depends on page directly and via several other imports
- cc_toolkit = CosmeticChangesToolkit(self, ignore=CANCEL.MATCH)
- self.text = cc_toolkit.change(old)
-
- # i18n package changed in Pywikibot 7.0.0
- old_i18n = i18n.twtranslate(self.site, 'cosmetic_changes-append',
- fallback_prompt='; cosmetic changes')
- if summary and old.strip().replace(
- '\r\n', '\n') != self.text.strip().replace('\r\n', '\n'):
- summary += i18n.twtranslate(self.site,
- 'pywikibot-cosmetic-changes',
- fallback_prompt=old_i18n)
- return summary
-
- def put(self, newtext: str,
- summary: Optional[str] = None,
- watch: Optional[str] = None,
- minor: bool = True,
- botflag: Optional[bool] = None,
- force: bool = False,
- asynchronous: bool = False,
- callback=None,
- show_diff: bool = False,
- **kwargs) -> None:
- """
- Save the page with the contents of the first argument as the text.
-
- This method is maintained primarily for backwards-compatibility.
- For new code, using Page.save() is preferred. See save() method
- docs for all parameters not listed here.
-
- .. versionadded:: 7.0
- The `show_diff` parameter
-
- :param newtext: The complete text of the revised page.
- :param show_diff: show changes between oldtext and newtext
- (default: False)
- """
- if show_diff:
- pywikibot.showDiff(self.text, newtext)
- self.text = newtext
- self.save(summary=summary, watch=watch, minor=minor, botflag=botflag,
- force=force, asynchronous=asynchronous, callback=callback,
- **kwargs)
-
- def watch(self, unwatch: bool = False) -> bool:
- """
- Add or remove this page to/from bot account's watchlist.
-
- :param unwatch: True to unwatch, False (default) to watch.
- :return: True if successful, False otherwise.
- """
- return self.site.watch(self, unwatch)
-
- def clear_cache(self) -> None:
- """Clear the cached attributes of the page."""
- self._revisions = {}
- for attr in self._cache_attrs:
- with suppress(AttributeError):
- delattr(self, attr)
-
- def purge(self, **kwargs) -> bool:
- """
- Purge the server's cache for this page.
-
- :keyword redirects: Automatically resolve redirects.
- :type redirects: bool
- :keyword converttitles: Convert titles to other variants if necessary.
- Only works if the wiki's content language supports variant
- conversion.
- :type converttitles: bool
- :keyword forcelinkupdate: Update the links tables.
- :type forcelinkupdate: bool
- :keyword forcerecursivelinkupdate: Update the links table, and update
- the links tables for any page that uses this page as a template.
- :type forcerecursivelinkupdate: bool
- """
- self.clear_cache()
- return self.site.purgepages([self], **kwargs)
-
- def touch(self, callback=None, botflag: bool = False, **kwargs):
- """
- Make a touch edit for this page.
-
- See save() method docs for all parameters.
- The following parameters will be overridden by this method:
- - summary, watch, minor, force, asynchronous
-
- Parameter botflag is False by default.
-
- The minor and botflag parameters are set to False, which prevents
- the edit from being hidden when it becomes a real edit due to a bug.
-
- :note: This discards content saved to self.text.
- """
- if self.exists():
- # ensure we always fetch the page text and do not change it.
- del self.text
- summary = i18n.twtranslate(self.site, 'pywikibot-touch')
- self.save(summary=summary, watch='nochange',
- minor=False, botflag=botflag, force=True,
- asynchronous=False, callback=callback,
- apply_cosmetic_changes=False, nocreate=True, **kwargs)
- else:
- raise NoPageError(self)
-
- def linkedPages(
- self, *args, **kwargs
- ) -> Generator['pywikibot.Page', None, None]:
- """Iterate Pages that this Page links to.
-
- Only returns pages from "normal" internal links. Embedded
- templates are omitted but links within them are returned. All
- interwiki and external links are omitted.
-
- For the parameters refer
- :py:mod:`APISite.pagelinks<pywikibot.site.APISite.pagelinks>`
-
- .. versionadded:: 7.0
- the `follow_redirects` keyword argument
- .. deprecated:: 7.0
- the positional arguments
-
- .. seealso:: https://www.mediawiki.org/wiki/API:Links
-
- :keyword namespaces: Only iterate pages in these namespaces
- (default: all)
- :type namespaces: iterable of str or Namespace key,
- or a single instance of those types. May be a '|' separated
- list of namespace identifiers.
- :keyword follow_redirects: if True, yields the target of any redirects,
- rather than the redirect page
- :keyword total: iterate no more than this number of pages in total
- :keyword content: if True, load the current content of each page
- """
- # Deprecate positional arguments and synchronize with Site.pagelinks
- keys = ('namespaces', 'total', 'content')
- for i, arg in enumerate(args):
- key = keys[i]
- issue_deprecation_warning(
- 'Positional argument {} ({})'.format(i + 1, arg),
- 'keyword argument "{}={}"'.format(key, arg),
- since='7.0.0')
- if key in kwargs:
- pywikibot.warning('{!r} is given as keyword argument {!r} '
- 'already; ignoring {!r}'
- .format(key, arg, kwargs[key]))
- else:
- kwargs[key] = arg
-
- return self.site.pagelinks(self, **kwargs)
-
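A short sketch of the keyword-based form (the namespace filter and
limit are assumptions chosen for illustration)::

    import pywikibot

    site = pywikibot.Site('en', 'wikipedia')
    page = pywikibot.Page(site, 'Python (programming language)')

    # Only main-namespace targets, at most five, without loading content.
    for linked in page.linkedPages(namespaces=[0], total=5, content=False):
        print(linked.title())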
- def interwiki(self, expand: bool = True):
- """
- Iterate interwiki links in the page text, excluding language links.
-
- :param expand: if True (default), include interwiki links found in
- templates transcluded onto this page; if False, only iterate
- interwiki links found in this page's own wikitext
- :return: a generator that yields Link objects
- :rtype: generator
- """
- # This function does not exist in the API, so it has to be
- # implemented by screen-scraping
- if expand:
- text = self.expand_text()
- else:
- text = self.text
- for linkmatch in pywikibot.link_regex.finditer(
- textlib.removeDisabledParts(text)):
- linktitle = linkmatch.group('title')
- link = Link(linktitle, self.site)
- # only yield links that are to a different site and that
- # are not language links
- try:
- if link.site != self.site:
- if linktitle.lstrip().startswith(':'):
- # initial ":" indicates not a language link
- yield link
- elif link.site.family != self.site.family:
- # link to a different family is not a language link
- yield link
- except Error:
- # ignore any links with invalid contents
- continue
-
- def langlinks(self, include_obsolete: bool = False) -> list:
- """
- Return a list of all inter-language Links on this page.
-
- :param include_obsolete: if true, return even Link objects whose site
- is obsolete
- :return: list of Link objects.
- """
- # Note: We preload a list of *all* langlinks, including links to
- # obsolete sites, and store that in self._langlinks. We then filter
- # this list if the method was called with include_obsolete=False
- # (which is the default)
- if not hasattr(self, '_langlinks'):
- self._langlinks = list(self.iterlanglinks(include_obsolete=True))
-
- if include_obsolete:
- return self._langlinks
- return [i for i in self._langlinks if not i.site.obsolete]
-
- def iterlanglinks(self,
- total: Optional[int] = None,
- include_obsolete: bool = False):
- """Iterate all inter-language links on this page.
-
- :param total: iterate no more than this number of pages in total
- :param include_obsolete: if true, yield even Link objects whose site
- is obsolete
- :return: a generator that yields Link objects.
- :rtype: generator
- """
- if hasattr(self, '_langlinks'):
- return iter(self.langlinks(include_obsolete=include_obsolete))
- # XXX We might want to fill _langlinks when the Site
- # method is called. If we do this, we'll have to think
- # about what will happen if the generator is not completely
- # iterated upon.
- return self.site.pagelanglinks(self, total=total,
- include_obsolete=include_obsolete)
-
- def data_item(self):
- """
- Convenience function to get the Wikibase item of a page.
-
- :rtype: pywikibot.page.ItemPage
- """
- return pywikibot.ItemPage.fromPage(self)
-
- def templates(self, content: bool = False):
- """
- Return a list of Page objects for templates used on this Page.
-
- Template parameters are ignored. This method only returns embedded
- templates, not template pages that happen to be referenced through
- a normal link.
-
- :param content: if True, retrieve the content of the current version
- of each template (default False)
- :type content: bool
- """
- # Data might have been preloaded
- if not hasattr(self, '_templates'):
- self._templates = list(self.itertemplates(content=content))
-
- return self._templates
-
- def itertemplates(self,
- total: Optional[int] = None,
- content: bool = False):
- """
- Iterate Page objects for templates used on this Page.
-
- Template parameters are ignored. This method only returns embedded
- templates, not template pages that happen to be referenced through
- a normal link.
-
- :param total: iterate no more than this number of pages in total
- :param content: if True, retrieve the content of the current version
- of each template (default False)
- :type content: bool
- """
- if hasattr(self, '_templates'):
- return iter(self._templates)
- return self.site.pagetemplates(self, total=total, content=content)
-
- def imagelinks(self, total: Optional[int] = None, content: bool = False):
- """
- Iterate FilePage objects for images displayed on this Page.
-
- :param total: iterate no more than this number of pages in total
- :param content: if True, retrieve the content of the current version
- of each image description page (default False)
- :return: a generator that yields FilePage objects.
- """
- return self.site.pageimages(self, total=total, content=content)
-
- def categories(self,
- with_sort_key: bool = False,
- total: Optional[int] = None,
- content: bool = False):
- """
- Iterate categories that the article is in.
-
- :param with_sort_key: if True, include the sort key in each Category.
- :param total: iterate no more than this number of pages in total
- :param content: if True, retrieve the content of the current version
- of each category description page (default False)
- :return: a generator that yields Category objects.
- :rtype: generator
- """
- # FIXME: bug T75561: with_sort_key is ignored by Site.pagecategories
- if with_sort_key:
- raise NotImplementedError('with_sort_key is not implemented')
-
- return self.site.pagecategories(self, total=total, content=content)
-
- def extlinks(self, total: Optional[int] = None):
- """
- Iterate all external URLs (not interwiki links) from this page.
-
- :param total: iterate no more than this number of pages in total
- :return: a generator that yields str objects containing URLs.
- :rtype: generator
- """
- return self.site.page_extlinks(self, total=total)
-
- def coordinates(self, primary_only: bool = False):
- """
- Return a list of Coordinate objects for points on the page.
-
- Uses the MediaWiki extension GeoData.
-
- :param primary_only: Only return the coordinate indicated to be primary
- :return: A list of Coordinate objects or a single Coordinate if
- primary_only is True
- :rtype: list of Coordinate or Coordinate or None
- """
- if not hasattr(self, '_coords'):
- self._coords = []
- self.site.loadcoordinfo(self)
- if primary_only:
- for coord in self._coords:
- if coord.primary:
- return coord
- return None
- return list(self._coords)
-
- def page_image(self):
- """
- Return the most appropriate image on the page.
-
- Uses the MediaWiki extension PageImages.
-
- :return: A FilePage object
- :rtype: pywikibot.page.FilePage
- """
- if not hasattr(self, '_pageimage'):
- self._pageimage = None
- self.site.loadpageimage(self)
-
- return self._pageimage
-
- def getRedirectTarget(self):
- """
- Return a Page object for the target this Page redirects to.
-
- If this page is not a redirect page, this will raise an
- IsNotRedirectPageError. This method can also raise a NoPageError.
-
- :rtype: pywikibot.Page
- """
- return self.site.getredirtarget(self)
-
- def moved_target(self):
- """
- Return a Page object for the target this Page was moved to.
-
- If this page was not moved, it will raise a NoMoveTargetError.
- This method also works if the source was already deleted.
-
- :rtype: pywikibot.page.Page
- :raises pywikibot.exceptions.NoMoveTargetError: page was not moved
- """
- gen = iter(self.site.logevents(logtype='move', page=self, total=1))
- try:
- lastmove = next(gen)
- except StopIteration:
- raise NoMoveTargetError(self)
- else:
- return lastmove.target_page
-
- def revisions(self,
- reverse: bool = False,
- total: Optional[int] = None,
- content: bool = False,
- starttime=None, endtime=None):
- """Generator which loads the version history as Revision instances."""
- # TODO: Only request uncached revisions
- self.site.loadrevisions(self, content=content, rvdir=reverse,
- starttime=starttime, endtime=endtime,
- total=total)
- return (self._revisions[rev] for rev in
- sorted(self._revisions, reverse=not reverse)[:total])
-
- def getVersionHistoryTable(self,
- reverse: bool = False,
- total: Optional[int] = None):
- """Return the version history as a wiki table."""
- result = '{| class="wikitable"\n'
- result += '! oldid || date/time || username || edit summary\n'
- for entry in self.revisions(reverse=reverse, total=total):
- result += '|----\n'
- result += ('| {r.revid} || {r.timestamp} || {r.user} || '
- '<nowiki>{r.comment}</nowiki>\n'.format(r=entry))
- result += '|}\n'
- return result
-
- def contributors(self,
- total: Optional[int] = None,
- starttime=None, endtime=None):
- """
- Compile contributors of this page with edit counts.
-
- :param total: iterate no more than this number of revisions in total
- :param starttime: retrieve revisions starting at this Timestamp
- :param endtime: retrieve revisions ending at this Timestamp
-
- :return: number of edits for each username
- :rtype: :py:obj:`collections.Counter`
- """
- return Counter(rev.user for rev in
- self.revisions(total=total,
- starttime=starttime, endtime=endtime))
-
- def revision_count(self, contributors=None) -> int:
- """Determine number of edits from contributors.
-
- :param contributors: contributor usernames
- :type contributors: iterable of str or pywikibot.User,
- a single pywikibot.User, a str or None
- :return: number of edits for all provided usernames
- """
- cnt = self.contributors()
-
- if not contributors:
- return sum(cnt.values())
-
- if isinstance(contributors, User):
- contributors = contributors.username
-
- if isinstance(contributors, str):
- return cnt[contributors]
-
- return sum(cnt[user.username] if isinstance(user, User) else cnt[user]
- for user in contributors)
-
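A sketch combining both helpers (the page title and usernames are
hypothetical)::

    import pywikibot

    site = pywikibot.Site('en', 'wikipedia')
    page = pywikibot.Page(site, 'Pywikibot')

    # contributors() returns a collections.Counter of username -> edits.
    for user, edits in page.contributors().most_common(3):
        print(user, edits)

    # revision_count() sums edits for the given (hypothetical) users.
    print(page.revision_count(['ExampleUser', 'AnotherUser']))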
- def merge_history(self, dest, timestamp=None, reason=None) -> None:
- """
- Merge revisions from this page into another page.
-
- See :py:obj:`APISite.merge_history` for details.
-
- :param dest: Destination page to which revisions will be merged
- :type dest: pywikibot.Page
- :param timestamp: Revisions from this page dating up to this timestamp
- will be merged into the destination page (if not given or False,
- all revisions will be merged)
- :type timestamp: pywikibot.Timestamp
- :param reason: Optional reason for the history merge
- :type reason: str
- """
- self.site.merge_history(self, dest, timestamp, reason)
-
- def move(self,
- newtitle: str,
- reason: Optional[str] = None,
- movetalk: bool = True,
- noredirect: bool = False,
- movesubpages: bool = True) -> None:
- """
- Move this page to a new title.
-
- .. versionchanged:: 7.2
- The `movesubpages` parameter was added
-
- :param newtitle: The new page title.
- :param reason: The edit summary for the move.
- :param movetalk: If true, move this page's talk page (if it exists)
- :param noredirect: if move succeeds, delete the old page
- (usually requires sysop privileges, depending on wiki settings)
- :param movesubpages: Rename subpages, if applicable.
- """
- if reason is None:
- pywikibot.output('Moving {} to [[{}]].'
- .format(self.title(as_link=True), newtitle))
- reason = pywikibot.input('Please enter a reason for the move:')
- return self.site.movepage(self, newtitle, reason,
- movetalk=movetalk,
- noredirect=noredirect,
- movesubpages=movesubpages)
-
- def delete(
- self,
- reason: Optional[str] = None,
- prompt: bool = True,
- mark: bool = False,
- automatic_quit: bool = False,
- *,
- deletetalk: bool = False
- ) -> None:
- """
- Delete the page from the wiki. Requires administrator status.
-
- .. versionchanged:: 7.1
- keyword only parameter *deletetalk* was added.
-
- :param reason: The edit summary for the deletion, or rationale
- for deletion if requesting. If None, ask for it.
- :param deletetalk: Also delete the talk page, if it exists.
- :param prompt: If true, prompt user for confirmation before deleting.
- :param mark: If true, and user does not have sysop rights, place a
- speedy-deletion request on the page instead. If false, non-sysops
- will be asked before marking pages for deletion.
- :param automatic_quit: show also the quit option, when asking
- for confirmation.
- """
- if reason is None:
- pywikibot.output('Deleting {}.'.format(self.title(as_link=True)))
- reason = pywikibot.input('Please enter a reason for the deletion:')
-
- # If user has 'delete' right, delete the page
- if self.site.has_right('delete'):
- answer = 'y'
- if prompt and not hasattr(self.site, '_noDeletePrompt'):
- answer = pywikibot.input_choice(
- 'Do you want to delete {}?'.format(self.title(
- as_link=True, force_interwiki=True)),
- [('Yes', 'y'), ('No', 'n'), ('All', 'a')],
- 'n', automatic_quit=automatic_quit)
- if answer == 'a':
- answer = 'y'
- self.site._noDeletePrompt = True
- if answer == 'y':
- self.site.delete(self, reason, deletetalk=deletetalk)
- return
-
- # Otherwise mark it for deletion
- if mark or hasattr(self.site, '_noMarkDeletePrompt'):
- answer = 'y'
- else:
- answer = pywikibot.input_choice(
- "Can't delete {}; do you want to mark it for deletion instead?"
- .format(self),
- [('Yes', 'y'), ('No', 'n'), ('All', 'a')],
- 'n', automatic_quit=False)
- if answer == 'a':
- answer = 'y'
- self.site._noMarkDeletePrompt = True
- if answer == 'y':
- template = '{{delete|1=%s}}\n' % reason
- # We can't add templates in a wikidata item, so let's use its
- # talk page
- if isinstance(self, pywikibot.ItemPage):
- target = self.toggleTalkPage()
- else:
- target = self
- target.text = template + target.text
- target.save(summary=reason)
-
- def has_deleted_revisions(self) -> bool:
- """Return True if the page has deleted revisions.
-
- .. versionadded:: 4.2
- """
- if not hasattr(self, '_has_deleted_revisions'):
- gen = self.site.deletedrevs(self, total=1, prop=['ids'])
- self._has_deleted_revisions = bool(list(gen))
- return self._has_deleted_revisions
-
- def loadDeletedRevisions(self, total: Optional[int] = None, **kwargs):
- """
- Retrieve deleted revisions for this Page.
-
- Stores all revisions' timestamps, dates, editors and comments in
- the self._deletedRevs attribute.
-
- :return: iterator of timestamps (which can be used to retrieve
- revisions later on).
- :rtype: generator
- """
- if not hasattr(self, '_deletedRevs'):
- self._deletedRevs = {}
- for item in self.site.deletedrevs(self, total=total, **kwargs):
- for rev in item.get('revisions', []):
- self._deletedRevs[rev['timestamp']] = rev
- yield rev['timestamp']
-
- def getDeletedRevision(
- self,
- timestamp,
- content: bool = False,
- **kwargs
- ) -> List:
- """
- Return a particular deleted revision by timestamp.
-
- :return: a list of [date, editor, comment, text, restoration
- marker]. text will be None, unless content is True (or has
- been retrieved earlier). If timestamp is not found, returns
- empty list.
- """
- if hasattr(self, '_deletedRevs'):
- if timestamp in self._deletedRevs and (
- not content
- or 'content' in self._deletedRevs[timestamp]):
- return self._deletedRevs[timestamp]
-
- for item in self.site.deletedrevs(self, start=timestamp,
- content=content, total=1, **kwargs):
- # should only be one item with one revision
- if item['title'] == self.title() and 'revisions' in item:
- return item['revisions'][0]
- return []
-
- def markDeletedRevision(self, timestamp, undelete: bool = True):
- """
- Mark the revision identified by timestamp for undeletion.
-
- :param undelete: if False, mark the revision to remain deleted.
- """
- if not hasattr(self, '_deletedRevs'):
- self.loadDeletedRevisions()
- if timestamp not in self._deletedRevs:
- raise ValueError(
- 'Timestamp {} is not a deleted revision'
- .format(timestamp))
- self._deletedRevs[timestamp]['marked'] = undelete
-
- def undelete(self, reason: Optional[str] = None) -> None:
- """
- Undelete revisions based on the markers set by previous calls.
-
- If no calls have been made since loadDeletedRevisions(), everything
- will be restored.
-
- Simplest case::
-
- Page(...).undelete('This will restore all revisions')
-
- More complex::
-
- pg = Page(...)
- revs = pg.loadDeletedRevisions()
- for rev in revs:
- if ... #decide whether to undelete a revision
- pg.markDeletedRevision(rev) #mark for undeletion
- pg.undelete('This will restore only selected revisions.')
-
- :param reason: Reason for the action.
- """
- if hasattr(self, '_deletedRevs'):
- undelete_revs = [ts for ts, rev in self._deletedRevs.items()
- if 'marked' in rev and rev['marked']]
- else:
- undelete_revs = []
- if reason is None:
- warn('Not passing a reason for undelete() is deprecated.',
- DeprecationWarning)
- pywikibot.output('Undeleting {}.'.format(self.title(as_link=True)))
- reason = pywikibot.input(
- 'Please enter a reason for the undeletion:')
- self.site.undelete(self, reason, revision=undelete_revs)
-
- def protect(self,
- reason: Optional[str] = None,
- protections: Optional[dict] = None,
- **kwargs) -> None:
- """
- Protect or unprotect a wiki page. Requires administrator status.
-
- Valid protection levels are '' (equivalent to 'none'),
- 'autoconfirmed', 'sysop' and 'all'. 'all' means 'everyone is allowed',
- i.e. that protection type will be unprotected.
-
- In order to unprotect a type of permission, the protection level must
- either be set to 'all' or '', or be skipped in the protections dict.
-
- Expiry of protections can be set via kwargs, see Site.protect() for
- details. By default there is no expiry for the protection types.
-
- :param protections: A dict mapping type of protection to protection
- level of that type. Allowed protection types for a page can be
- retrieved by self.applicable_protections().
- Defaults to None, which means unprotect all protection
- types.
- Example: {'move': 'sysop', 'edit': 'autoconfirmed'}
-
- :param reason: Reason for the action, default is None and will set an
- empty string.
- """
- protections = protections or {} # protections is converted to {}
- reason = reason or '' # None is converted to ''
-
- self.site.protect(self, protections, reason, **kwargs)
-
- def change_category(self, old_cat, new_cat,
- summary: Optional[str] = None,
- sort_key=None,
- in_place: bool = True,
- include: Optional[List[str]] = None,
- show_diff: bool = False) -> bool:
- """
- Remove page from old_cat and add it to new_cat.
-
- .. versionadded:: 7.0
- The `show_diff` parameter
-
- :param old_cat: category to be removed
- :type old_cat: pywikibot.page.Category
- :param new_cat: category to be added, if any
- :type new_cat: pywikibot.page.Category or None
-
- :param summary: string to use as an edit summary
-
- :param sort_key: sort key to use for the added category.
- Unused if new_cat is None, or if in_place is True.
- If sort_key is True, the sort key used for old_cat will be used.
-
- :param in_place: if True, change categories in place rather than
- rearranging them.
-
- :param include: list of tags not to be disabled by default in relevant
- textlib functions, where CategoryLinks can be searched.
- :param show_diff: show changes between oldtext and newtext
- (default: False)
-
- :return: True if the page was changed and saved, otherwise False.
- """
- # get list of Category objects the article is in and remove possible
- # duplicates
- cats = []
- for cat in textlib.getCategoryLinks(self.text, site=self.site,
- include=include or []):
- if cat not in cats:
- cats.append(cat)
-
- if not self.has_permission():
- pywikibot.output("Can't edit {}, skipping it..."
- .format(self.title(as_link=True)))
- return False
-
- if old_cat not in cats:
- if self.namespace() != 10:
- pywikibot.error('{} is not in category {}!'
- .format(self.title(as_link=True),
- old_cat.title()))
- else:
- pywikibot.output('{} is not in category {}, skipping...'
- .format(self.title(as_link=True),
- old_cat.title()))
- return False
-
- # This prevents the bot from adding new_cat if it is already present.
- if new_cat in cats:
- new_cat = None
-
- oldtext = self.text
- if in_place or self.namespace() == 10:
- newtext = textlib.replaceCategoryInPlace(oldtext, old_cat, new_cat,
- site=self.site)
- else:
- old_cat_pos = cats.index(old_cat)
- if new_cat:
- if sort_key is True:
- # Fetch sort_key from old_cat in current page.
- sort_key = cats[old_cat_pos].sortKey
- cats[old_cat_pos] = Category(self.site, new_cat.title(),
- sort_key=sort_key)
- else:
- cats.pop(old_cat_pos)
-
- try:
- newtext = textlib.replaceCategoryLinks(oldtext, cats)
- except ValueError:
- # Make sure that the only way replaceCategoryLinks() can return
- # a ValueError is in the case of interwiki links to self.
- pywikibot.output('Skipping {} because of interwiki link to '
- 'self'.format(self.title()))
- return False
-
- if oldtext != newtext:
- try:
- self.put(newtext, summary, show_diff=show_diff)
- except PageSaveRelatedError as error:
- pywikibot.output('Page {} not saved: {}'
- .format(self.title(as_link=True), error))
- except NoUsernameError:
- pywikibot.output('Page {} not saved; sysop privileges '
- 'required.'.format(self.title(as_link=True)))
- else:
- return True
-
- return False
-
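A sketch of a category move (article and category names are
assumptions)::

    import pywikibot

    site = pywikibot.Site('en', 'wikipedia')
    page = pywikibot.Page(site, 'Some article')
    old_cat = pywikibot.Category(site, 'Category:Old name')
    new_cat = pywikibot.Category(site, 'Category:New name')

    # Returns True only if the page text changed and was saved.
    if page.change_category(old_cat, new_cat,
                            summary='Bot: rename category',
                            show_diff=True):
        print('recategorized', page.title())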
- def is_flow_page(self) -> bool:
- """Whether a page is a Flow page."""
- return self.content_model == 'flow-board'
-
- def create_short_link(self,
- permalink: bool = False,
- with_protocol: bool = True) -> str:
- """
- Return a shortened link that points to that page.
-
- If shared_urlshortner_wiki is defined in family config, it'll use
- that site to create the link instead of the current wiki.
-
- :param permalink: If true, the link will point to the actual revision
- of the page.
- :param with_protocol: If true, and if it's not already included,
- the link will have http(s) protocol prepended. On Wikimedia wikis
- the protocol is already present.
- :return: The reduced link.
- """
- wiki = self.site
- if self.site.family.shared_urlshortner_wiki:
- wiki = pywikibot.Site(*self.site.family.shared_urlshortner_wiki)
-
- url = self.permalink() if permalink else self.full_url()
-
- link = wiki.create_short_link(url)
- if re.match(PROTOCOL_REGEX, link):
- if not with_protocol:
- return re.sub(PROTOCOL_REGEX, '', link)
- elif with_protocol:
- return '{}://{}'.format(wiki.protocol(), link)
- return link
-
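A usage sketch (whether the target wiki offers URL shortening, and the
page title, are assumptions)::

    import pywikibot

    site = pywikibot.Site('en', 'wikipedia')
    page = pywikibot.Page(site, 'Pywikibot')

    # Point the short URL at the current revision, protocol included.
    short = page.create_short_link(permalink=True, with_protocol=True)
    print(short)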
-
-class Page(BasePage):
-
- """Page: A MediaWiki page."""
-
- def __init__(self, source, title: str = '', ns=0) -> None:
- """Instantiate a Page object."""
- if isinstance(source, pywikibot.site.BaseSite) and not title:
- raise ValueError('Title must be specified and not empty '
- 'if source is a Site.')
- super().__init__(source, title, ns)
-
- @property
- def raw_extracted_templates(self):
- """
- Extract templates using :py:obj:`textlib.extract_templates_and_params`.
-
- Disabled parts and whitespace are stripped, except for
- whitespace in anonymous positional arguments.
-
- This value is cached.
-
- :rtype: list of (str, OrderedDict)
- """
- if not hasattr(self, '_raw_extracted_templates'):
- templates = textlib.extract_templates_and_params(
- self.text, True, True)
- self._raw_extracted_templates = templates
-
- return self._raw_extracted_templates
-
- def templatesWithParams(self):
- """
- Return templates used on this Page.
-
- The templates are extracted by
- :py:obj:`textlib.extract_templates_and_params`, with positional
- arguments placed first in order, and each named argument
- appearing as 'name=value'.
-
- All parameter keys and values for each template are stripped of
- whitespace.
-
- :return: a list of tuples with one tuple for each template invocation
- in the page, with the template Page as the first entry and a list
- of parameters as the second entry.
- :rtype: list of (pywikibot.page.Page, list)
- """
- # WARNING: may not return all templates used in particularly
- # intricate cases such as template substitution
- titles = {t.title() for t in self.templates()}
- templates = self.raw_extracted_templates
- # backwards-compatibility: convert the dict returned as the second
- # element into a list in the format used by old scripts
- result = []
- for template in templates:
- try:
- link = pywikibot.Link(template[0], self.site,
- default_namespace=10)
- if link.canonical_title() not in titles:
- continue
- except Error:
- # this is a parser function or magic word, not template name
- # the template name might also contain invalid parts
- continue
- args = template[1]
- intkeys = {}
- named = {}
- positional = []
- for key in sorted(args):
- try:
- intkeys[int(key)] = args[key]
- except ValueError:
- named[key] = args[key]
- for i in range(1, len(intkeys) + 1):
- # only those args with consecutive integer keys can be
- # treated as positional; an integer could also be used
- # (out of order) as the key for a named argument
- # example: {{tmp|one|two|5=five|three}}
- if i in intkeys:
- positional.append(intkeys[i])
- else:
- for k in intkeys:
- if k < 1 or k >= i:
- named[str(k)] = intkeys[k]
- break
- for item in named.items():
- positional.append('{}={}'.format(*item))
- result.append((pywikibot.Page(link, self.site), positional))
- return result
-
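A sketch of inspecting template invocations on a page (the title is an
assumption)::

    import pywikibot

    site = pywikibot.Site('en', 'wikipedia')
    page = pywikibot.Page(site, 'Pywikibot')

    # Each entry is (template Page, ['positional', 'name=value', ...]).
    for template, params in page.templatesWithParams():
        print(template.title(with_ns=False), params)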
- def set_redirect_target(
- self,
- target_page,
- create: bool = False,
- force: bool = False,
- keep_section: bool = False,
- save: bool = True,
- **kwargs
- ):
- """
- Change the page's text to point to the redirect page.
-
- :param target_page: target of the redirect; this argument is required.
- :type target_page: pywikibot.Page or str
- :param create: if true, create the redirect even if the page
- doesn't exist.
- :param force: if true, set the redirect target even if the page
- doesn't exist or is not a redirect.
- :param keep_section: if the old redirect links to a section
- and the new one doesn't, the old redirect's section is kept.
- :param save: if true, save the page immediately.
- :param kwargs: Arguments which are used for saving the page directly
- afterwards, like 'summary' for edit summary.
- """
- if isinstance(target_page, str):
- target_page = pywikibot.Page(self.site, target_page)
- elif self.site != target_page.site:
- raise InterwikiRedirectPageError(self, target_page)
- if not self.exists() and not (create or force):
- raise NoPageError(self)
- if self.exists() and not self.isRedirectPage() and not force:
- raise IsNotRedirectPageError(self)
- redirect_regex = self.site.redirect_regex
- if self.exists():
- old_text = self.get(get_redirect=True)
- else:
- old_text = ''
- result = redirect_regex.search(old_text)
- if result:
- oldlink = result.group(1)
- if (keep_section and '#' in oldlink
- and target_page.section() is None):
- sectionlink = oldlink[oldlink.index('#'):]
- target_page = pywikibot.Page(
- self.site,
- target_page.title() + sectionlink
- )
- prefix = self.text[:result.start()]
- suffix = self.text[result.end():]
- else:
- prefix = ''
- suffix = ''
-
- target_link = target_page.title(as_link=True, textlink=True,
- allow_interwiki=False)
- target_link = '#{} {}'.format(self.site.redirect(), target_link)
- self.text = prefix + target_link + suffix
- if save:
- self.save(**kwargs)
-
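A sketch of repointing an existing redirect (both titles are
assumptions)::

    import pywikibot

    site = pywikibot.Site('en', 'wikipedia')
    redirect = pywikibot.Page(site, 'Old title')

    # keep_section preserves a section anchor from the old redirect;
    # extra kwargs such as summary are passed through to save().
    redirect.set_redirect_target('New title', keep_section=True,
                                 summary='Bot: retarget redirect')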
- def get_best_claim(self, prop: str):
- """
- Return the first best Claim for this page.
-
- Return the first 'preferred' ranked Claim specified by Wikibase
- property or the first 'normal' one otherwise.
-
- .. versionadded:: 3.0
-
- :param prop: property id, "P###"
- :return: Claim object given by Wikibase property number
- for this page object.
- :rtype: pywikibot.Claim or None
-
- :raises UnknownExtensionError: site has no Wikibase extension
- """
- def find_best_claim(claims):
- """Find the first best ranked claim."""
- index = None
- for i, claim in enumerate(claims):
- if claim.rank == 'preferred':
- return claim
- if index is None and claim.rank == 'normal':
- index = i
- if index is None:
- index = 0
- return claims[index]
-
- if not self.site.has_data_repository:
- raise UnknownExtensionError(
- 'Wikibase is not implemented for {}.'.format(self.site))
-
- def get_item_page(func, *args):
- try:
- item_p = func(*args)
- item_p.get()
- return item_p
- except NoPageError:
- return None
- except IsRedirectPageError:
- return get_item_page(item_p.getRedirectTarget)
-
- item_page = get_item_page(pywikibot.ItemPage.fromPage, self)
- if item_page and prop in item_page.claims:
- return find_best_claim(item_page.claims[prop])
- return None
-
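A sketch of reading the best-ranked claim for a property (using P31,
'instance of' on Wikidata, is an assumption about the connected
repository)::

    import pywikibot

    site = pywikibot.Site('en', 'wikipedia')
    page = pywikibot.Page(site, 'Douglas Adams')

    # Prefers a 'preferred'-ranked claim, else the first 'normal' one.
    claim = page.get_best_claim('P31')
    if claim is not None:
        print(claim.getTarget())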
-
-class Category(Page):
-
- """A page in the Category: namespace."""
-
- def __init__(self, source, title: str = '', sort_key=None) -> None:
- """
- Initializer.
-
- All parameters are the same as for Page() Initializer.
- """
- self.sortKey = sort_key
- super().__init__(source, title, ns=14)
- if self.namespace() != 14:
- raise ValueError("'{}' is not in the category namespace!"
- .format(self.title()))
-
- def aslink(self, sort_key: Optional[str] = None) -> str:
- """
- Return a link to place a page in this Category.
-
- Use this only to generate a "true" category link, not for interwikis
- or text links to category pages.
-
- :param sort_key: The sort key for the article to be placed in this
- Category; if omitted, default sort key is used.
- """
- key = sort_key or self.sortKey
- if key is not None:
- title_with_sort_key = self.title(with_section=False) + '|' + key
- else:
- title_with_sort_key = self.title(with_section=False)
- return '[[{}]]'.format(title_with_sort_key)
-
- def subcategories(self,
- recurse: Union[int, bool] = False,
- total: Optional[int] = None,
- content: bool = False):
- """
- Iterate all subcategories of the current category.
-
- :param recurse: if not False or 0, also iterate subcategories of
- subcategories. If an int, limit recursion to this number of
- levels. (Example: recurse=1 will iterate direct subcats and
- first-level sub-sub-cats, but no deeper.)
- :param total: iterate no more than this number of
- subcategories in total (at all levels)
- :param content: if True, retrieve the content of the current version
- of each category description page (default False)
- """
- if not isinstance(recurse, bool) and recurse:
- recurse = recurse - 1
- if not hasattr(self, '_subcats'):
- self._subcats = []
- for member in self.site.categorymembers(
- self, member_type='subcat', total=total, content=content):
- subcat = Category(member)
- self._subcats.append(subcat)
- yield subcat
- if total is not None:
- total -= 1
- if total == 0:
- return
- if recurse:
- for item in subcat.subcategories(
- recurse, total=total, content=content):
- yield item
- if total is not None:
- total -= 1
- if total == 0:
- return
- else:
- for subcat in self._subcats:
- yield subcat
- if total is not None:
- total -= 1
- if total == 0:
- return
- if recurse:
- for item in subcat.subcategories(
- recurse, total=total, content=content):
- yield item
- if total is not None:
- total -= 1
- if total == 0:
- return
-
- def articles(self,
- recurse: Union[int, bool] = False,
- total: Optional[int] = None,
- content: bool = False,
- namespaces: Union[int, List[int]] = None,
- sortby: Optional[str] = None,
- reverse: bool = False,
- starttime=None, endtime=None,
- startprefix: Optional[str] = None,
- endprefix: Optional[str] = None):
- """
- Yield all articles in the current category.
-
- By default, yields all *pages* in the category that are not
- subcategories!
-
- :param recurse: if not False or 0, also iterate articles in
- subcategories. If an int, limit recursion to this number of
- levels. (Example: recurse=1 will iterate articles in first-level
- subcats, but no deeper.)
- :param total: iterate no more than this number of pages in
- total (at all levels)
- :param namespaces: only yield pages in the specified namespaces
- :param content: if True, retrieve the content of the current version
- of each page (default False)
- :param sortby: determines the order in which results are generated,
- valid values are "sortkey" (default, results ordered by category
- sort key) or "timestamp" (results ordered by time page was
- added to the category). This applies recursively.
- :param reverse: if True, generate results in reverse order
- (default False)
- :param starttime: if provided, only generate pages added after this
- time; not valid unless sortby="timestamp"
- :type starttime: pywikibot.Timestamp
- :param endtime: if provided, only generate pages added before this
- time; not valid unless sortby="timestamp"
- :type endtime: pywikibot.Timestamp
- :param startprefix: if provided, only generate pages >= this title
- lexically; not valid if sortby="timestamp"
- :param endprefix: if provided, only generate pages < this title
- lexically; not valid if sortby="timestamp"
- :rtype: typing.Iterable[pywikibot.Page]
- """
- seen = set()
- for member in self.site.categorymembers(self,
- namespaces=namespaces,
- total=total,
- content=content,
- sortby=sortby,
- reverse=reverse,
- starttime=starttime,
- endtime=endtime,
- startprefix=startprefix,
- endprefix=endprefix,
- member_type=['page', 'file']):
- if recurse:
- seen.add(hash(member))
- yield member
- if total is not None:
- total -= 1
- if total == 0:
- return
-
- if recurse:
- if not isinstance(recurse, bool) and recurse:
- recurse -= 1
- for subcat in self.subcategories():
- for article in subcat.articles(recurse=recurse,
- total=total,
- content=content,
- namespaces=namespaces,
- sortby=sortby,
- reverse=reverse,
- starttime=starttime,
- endtime=endtime,
- startprefix=startprefix,
- endprefix=endprefix):
- hash_value = hash(article)
- if hash_value in seen:
- continue
- seen.add(hash_value)
- yield article
- if total is not None:
- total -= 1
- if total == 0:
- return
-
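A sketch of a bounded recursive walk (the category name, depth and
limit are assumptions)::

    import pywikibot

    site = pywikibot.Site('en', 'wikipedia')
    cat = pywikibot.Category(site, 'Category:Physics')

    # recurse=1: direct members plus one level of subcategories;
    # total bounds the overall number of yielded pages.
    for article in cat.articles(recurse=1, total=10):
        print(article.title())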
- def members(self, recurse: bool = False,
- namespaces=None,
- total: Optional[int] = None,
- content: bool = False):
- """Yield all category contents (subcats, pages, and files).
-
- :rtype: typing.Iterable[pywikibot.Page]
- """
- for member in self.site.categorymembers(
- self, namespaces=namespaces, total=total, content=content):
- yield member
- if total is not None:
- total -= 1
- if total == 0:
- return
- if recurse:
- if not isinstance(recurse, bool) and recurse:
- recurse = recurse - 1
- for subcat in self.subcategories():
- for article in subcat.members(
- recurse, namespaces, total=total, content=content):
- yield article
- if total is not None:
- total -= 1
- if total == 0:
- return
-
- def isEmptyCategory(self) -> bool:
- """Return True if category has no members (including subcategories)."""
- ci = self.categoryinfo
- return sum(ci[k] for k in ['files', 'pages', 'subcats']) == 0
-
- def isHiddenCategory(self) -> bool:
- """Return True if the category is hidden."""
- return 'hiddencat' in self.properties()
-
- @property
- def categoryinfo(self) -> dict:
- """
- Return a dict containing information about the category.
-
- The dict contains values for:
-
- Numbers of pages, subcategories, files, and total contents.
- """
- return self.site.categoryinfo(self)
-
- def newest_pages(self, total=None):
- """
- Return pages in a category ordered by the creation date.
-
- If two or more pages are created at the same time, the pages are
- returned in the order they were added to the category. The most
- recently added page is returned first.
-
- Pages can only be returned ordered from newest to oldest, as it
- is impossible to determine the oldest page in a category without
- checking all pages. The category is therefore checked in order of
- addition, newest first, and every cached page created after the
- currently checked page was added can safely be yielded: such a page
- must also have been added after the currently checked one, and all
- later additions have already been seen.
-
- :param total: The total number of pages queried.
- :type total: int
- :return: A page generator of all pages in a category ordered by the
- creation date, from newest to oldest. Note: it currently only
- returns plain Page instances, not more specific subclasses. This
- might change, so don't rely on getting only Page instances.
- :rtype: generator
- """
- def check_cache(latest):
- """Return the cached pages in order and not more than total."""
- cached = []
- for timestamp in sorted((ts for ts in cache if ts > latest),
- reverse=True):
- # The complete list can be removed, it'll either yield all of
- # them, or only a portion but will skip the rest anyway
- cached += cache.pop(timestamp)[:None if total is None else
- total - len(cached)]
- if total and len(cached) >= total:
- break # already got enough
- assert total is None or len(cached) <= total, \
- 'Number of caches is more than total number requested'
- return cached
-
- # Cache of pages which have been checked but were created before
- # the currently checked page was added; they are yielded once no
- # newer creation can still appear. Pages are stored under their
- # creation timestamp; be prepared for multiple pages per timestamp.
- cache = defaultdict(list)
- # TODO: Make site.categorymembers usable here, as it returns pages
- # There is no total defined, as it's not known how many pages need to
- # be checked before the total amount of new pages was found. In worst
- # case all pages of a category need to be checked.
- for member in pywikibot.data.api.QueryGenerator(
- site=self.site, parameters={
- 'list': 'categorymembers', 'cmsort': 'timestamp',
- 'cmdir': 'older', 'cmprop': 'timestamp|title',
- 'cmtitle': self.title()}):
- # TODO: Upcast to suitable class
- page = pywikibot.Page(self.site, member['title'])
- assert page.namespace() == member['ns'], \
- 'Namespace of the page is not consistent'
- cached = check_cache(pywikibot.Timestamp.fromISOformat(
- member['timestamp']))
- yield from cached
- if total is not None:
- total -= len(cached)
- if total <= 0:
- break
- cache[page.oldest_revision.timestamp] += [page]
- else:
- # clear cache
- assert total is None or total > 0, \
- 'As many items as given in total already returned'
- yield from check_cache(pywikibot.Timestamp.min)
+__all__ = ('User', )
class User(Page):
@@ -2668,7 +66,7 @@
return '#' + self.title(with_ns=False)
return self.title(with_ns=False)
- def isRegistered(self, force: bool = False) -> bool:
+ def isRegistered(self, force: bool = False) -> bool: # noqa: N802
"""
Determine if the user is registered on the site.
@@ -2684,7 +82,7 @@
return (not self.isAnonymous()
and 'registration' in self.getprops(force))
- def isAnonymous(self) -> bool:
+ def isAnonymous(self) -> bool: # noqa: N802
"""Determine if the user is editing as an IP address."""
return is_ip_address(self.username)
@@ -2705,12 +103,12 @@
self._userprops['blockreason'] = r[0]['reason']
return self._userprops
- def registration(self, force: bool = False):
+ def registration(self,
+ force: bool = False) -> Optional[pywikibot.Timestamp]:
"""
Fetch registration date for this user.
:param force: if True, forces reloading the data from API
- :rtype: pywikibot.Timestamp or None
"""
if not self.isAnonymous():
reg = self.getprops(force).get('registration')
@@ -2718,7 +116,7 @@
return pywikibot.Timestamp.fromISOformat(reg)
return None
- def editCount(self, force: bool = False) -> int:
+ def editCount(self, force: bool = False) -> int: # noqa: N802
"""
Return edit count for a registered user.
@@ -2740,7 +138,7 @@
return 'blockedby' in self.getprops(force)
@deprecated('is_blocked', since='7.0.0')
- def isBlocked(self, force: bool = False) -> bool:
+ def isBlocked(self, force: bool = False) -> bool: # noqa: N802
"""Determine whether the user is currently blocked.
.. deprecated:: 7.0
@@ -2759,7 +157,7 @@
"""
return self.site.is_locked(self.username, force)
- def isEmailable(self, force: bool = False) -> bool:
+ def isEmailable(self, force: bool = False) -> bool: # noqa: N802
"""
Determine whether emails may be sent to this user through MediaWiki.
@@ -2796,15 +194,13 @@
"""
return self.getprops(force).get('rights', [])
- def getUserPage(self, subpage: str = ''):
+ def getUserPage(self, subpage: str = '') -> Page: # noqa: N802
"""
Return a Page object relative to this user's main page.
:param subpage: subpage part to be appended to the main
page title (optional)
- :type subpage: str
:return: Page object of user page or user subpage
- :rtype: pywikibot.Page
"""
if self._isAutoblock:
# This user is probably being queried for purpose of lifting
@@ -2815,15 +211,13 @@
subpage = '/' + subpage
return Page(Link(self.title() + subpage, self.site))
- def getUserTalkPage(self, subpage: str = ''):
+ def getUserTalkPage(self, subpage: str = '') -> Page: # noqa: N802
"""
Return a Page object relative to this user's main talk page.
:param subpage: subpage part to be appended to the main
talk page title (optional)
- :type subpage: str
:return: Page object of user talk page or user talk subpage
- :rtype: pywikibot.Page
"""
if self._isAutoblock:
# This user is probably being queried for purpose of lifting
@@ -2960,22 +354,24 @@
contrib.get('comment'))
@property
- def first_edit(self):
+ def first_edit(
+ self
+ ) -> Optional[Tuple[Page, int, pywikibot.Timestamp, str]]:
"""Return first user contribution.
:return: first user contribution entry
:return: tuple of pywikibot.Page, revid, pywikibot.Timestamp, comment
- :rtype: tuple or None
"""
return next(self.contributions(reverse=True, total=1), None)
@property
- def last_edit(self):
+ def last_edit(
+ self
+ ) -> Optional[Tuple[Page, int, pywikibot.Timestamp, str]]:
"""Return last user contribution.
:return: last user contribution entry
:return: tuple of pywikibot.Page, revid, pywikibot.Timestamp, comment
- :rtype: tuple or None
"""
return next(self.contributions(total=1), None)
@@ -2998,7 +394,7 @@
for contrib in data['revisions']:
yield page, Revision(**contrib)
- def uploadedImages(self, total=10):
+ def uploadedImages(self, total: int = 10): # noqa: N802
"""
Yield tuples describing files uploaded by this user.
@@ -3007,7 +403,6 @@
Pages returned are not guaranteed to be unique.
:param total: limit result to this number of pages
- :type total: int
"""
if not self.isRegistered():
return
--
To view, visit https://gerrit.wikimedia.org/r/c/pywikibot/core/+/778471
To unsubscribe, or for help writing mail filters, visit https://gerrit.wikimedia.org/r/settings
Gerrit-Project: pywikibot/core
Gerrit-Branch: master
Gerrit-Change-Id: I97035c8a7eef4247dbcc0ac871526d9d7cab77d9
Gerrit-Change-Number: 778471
Gerrit-PatchSet: 6
Gerrit-Owner: Xqt <info(a)gno.de>
Gerrit-Reviewer: Xqt <info(a)gno.de>
Gerrit-Reviewer: jenkins-bot
Gerrit-MessageType: merged
jenkins-bot has submitted this change. ( https://gerrit.wikimedia.org/r/c/pywikibot/i18n/+/778222 )
Change subject: [i18n] Remove i18n files for unsupported or deleted scripts
......................................................................
[i18n] Remove i18n files for unsupported or deleted scripts
Bug: T223826
Change-Id: I4e13cc8494d1e15dd804b7f6822c2e7b61d00eec
---
D capitalize_redirects/ab.json
D capitalize_redirects/af.json
D capitalize_redirects/am.json
D capitalize_redirects/an.json
D capitalize_redirects/ar.json
D capitalize_redirects/arc.json
D capitalize_redirects/ast.json
D capitalize_redirects/av.json
D capitalize_redirects/awa.json
D capitalize_redirects/az.json
D capitalize_redirects/azb.json
D capitalize_redirects/ba.json
D capitalize_redirects/bcc.json
D capitalize_redirects/be-tarask.json
D capitalize_redirects/bg.json
D capitalize_redirects/bjn.json
D capitalize_redirects/blk.json
D capitalize_redirects/bn.json
D capitalize_redirects/bo.json
D capitalize_redirects/br.json
D capitalize_redirects/bs.json
D capitalize_redirects/ca.json
D capitalize_redirects/ce.json
D capitalize_redirects/ckb.json
D capitalize_redirects/cs.json
D capitalize_redirects/csb.json
D capitalize_redirects/cy.json
D capitalize_redirects/da.json
D capitalize_redirects/de.json
D capitalize_redirects/diq.json
D capitalize_redirects/dty.json
D capitalize_redirects/el.json
D capitalize_redirects/eml.json
D capitalize_redirects/en.json
D capitalize_redirects/eo.json
D capitalize_redirects/es.json
D capitalize_redirects/et.json
D capitalize_redirects/eu.json
D capitalize_redirects/fa.json
D capitalize_redirects/fi.json
D capitalize_redirects/fo.json
D capitalize_redirects/fr.json
D capitalize_redirects/frp.json
D capitalize_redirects/frr.json
D capitalize_redirects/gl.json
D capitalize_redirects/gsw.json
D capitalize_redirects/ha.json
D capitalize_redirects/hak.json
D capitalize_redirects/haw.json
D capitalize_redirects/he.json
D capitalize_redirects/hi.json
D capitalize_redirects/hif.json
D capitalize_redirects/hr.json
D capitalize_redirects/hu.json
D capitalize_redirects/hy.json
D capitalize_redirects/ia.json
D capitalize_redirects/id.json
D capitalize_redirects/ilo.json
D capitalize_redirects/io.json
D capitalize_redirects/is.json
D capitalize_redirects/it.json
D capitalize_redirects/ja.json
D capitalize_redirects/jv.json
D capitalize_redirects/kab.json
D capitalize_redirects/kjp.json
D capitalize_redirects/kk.json
D capitalize_redirects/kn.json
D capitalize_redirects/ko.json
D capitalize_redirects/ksh.json
D capitalize_redirects/ky.json
D capitalize_redirects/lb.json
D capitalize_redirects/li.json
D capitalize_redirects/lt.json
D capitalize_redirects/map-bms.json
D capitalize_redirects/mg.json
D capitalize_redirects/min.json
D capitalize_redirects/mk.json
D capitalize_redirects/ml.json
D capitalize_redirects/mnw.json
D capitalize_redirects/ms.json
D capitalize_redirects/mt.json
D capitalize_redirects/my.json
D capitalize_redirects/nan.json
D capitalize_redirects/nap.json
D capitalize_redirects/nb.json
D capitalize_redirects/nds-nl.json
D capitalize_redirects/ne.json
D capitalize_redirects/new.json
D capitalize_redirects/nl.json
D capitalize_redirects/nn.json
D capitalize_redirects/oc.json
D capitalize_redirects/pl.json
D capitalize_redirects/pms.json
D capitalize_redirects/pt-br.json
D capitalize_redirects/pt.json
D capitalize_redirects/qqq.json
D capitalize_redirects/ro.json
D capitalize_redirects/ru.json
D capitalize_redirects/sa.json
D capitalize_redirects/sco.json
D capitalize_redirects/shn.json
D capitalize_redirects/sk.json
D capitalize_redirects/sl.json
D capitalize_redirects/so.json
D capitalize_redirects/sq.json
D capitalize_redirects/sr.json
D capitalize_redirects/su.json
D capitalize_redirects/sv.json
D capitalize_redirects/szl.json
D capitalize_redirects/ta.json
D capitalize_redirects/te.json
D capitalize_redirects/tg.json
D capitalize_redirects/th.json
D capitalize_redirects/tl.json
D capitalize_redirects/tly.json
D capitalize_redirects/tr.json
D capitalize_redirects/tt.json
D capitalize_redirects/ug.json
D capitalize_redirects/uk.json
D capitalize_redirects/ur.json
D capitalize_redirects/uz.json
D capitalize_redirects/vec.json
D capitalize_redirects/vi.json
D capitalize_redirects/vo.json
D capitalize_redirects/wa.json
D capitalize_redirects/war.json
D capitalize_redirects/xmf.json
D capitalize_redirects/yi.json
D capitalize_redirects/yo.json
D capitalize_redirects/zh.json
D casechecker/ab.json
D casechecker/an.json
D casechecker/ar.json
D casechecker/ast.json
D casechecker/av.json
D casechecker/awa.json
D casechecker/ba.json
D casechecker/bcc.json
D casechecker/be-tarask.json
D casechecker/bg.json
D casechecker/bjn.json
D casechecker/blk.json
D casechecker/bn.json
D casechecker/bo.json
D casechecker/br.json
D casechecker/bs.json
D casechecker/ca.json
D casechecker/ce.json
D casechecker/cs.json
D casechecker/csb.json
D casechecker/da.json
D casechecker/de.json
D casechecker/diq.json
D casechecker/dty.json
D casechecker/el.json
D casechecker/en.json
D casechecker/eo.json
D casechecker/es.json
D casechecker/eu.json
D casechecker/fa.json
D casechecker/fr.json
D casechecker/frr.json
D casechecker/fy.json
D casechecker/gl.json
D casechecker/gsw.json
D casechecker/ha.json
D casechecker/hak.json
D casechecker/haw.json
D casechecker/he.json
D casechecker/hi.json
D casechecker/hsb.json
D casechecker/hu.json
D casechecker/hy.json
D casechecker/ia.json
D casechecker/id.json
D casechecker/io.json
D casechecker/is.json
D casechecker/it.json
D casechecker/ja.json
D casechecker/jv.json
D casechecker/kab.json
D casechecker/kk.json
D casechecker/km.json
D casechecker/ko.json
D casechecker/ksh.json
D casechecker/lb.json
D casechecker/li.json
D casechecker/lt.json
D casechecker/lv.json
D casechecker/mg.json
D casechecker/mk.json
D casechecker/ml.json
D casechecker/mnw.json
D casechecker/ms.json
D casechecker/my.json
D casechecker/nan.json
D casechecker/nap.json
D casechecker/nb.json
D casechecker/nds-nl.json
D casechecker/ne.json
D casechecker/nl.json
D casechecker/nn.json
D casechecker/oc.json
D casechecker/pdc.json
D casechecker/pl.json
D casechecker/pms.json
D casechecker/pt-br.json
D casechecker/pt.json
D casechecker/qqq.json
D casechecker/ro.json
D casechecker/ru.json
D casechecker/sco.json
D casechecker/sk.json
D casechecker/so.json
D casechecker/sr.json
D casechecker/su.json
D casechecker/sv.json
D casechecker/te.json
D casechecker/tg.json
D casechecker/th.json
D casechecker/tr.json
D casechecker/uk.json
D casechecker/ur.json
D casechecker/uz.json
D casechecker/vi.json
D casechecker/yi.json
D casechecker/zh.json
D catall/ab.json
D catall/af.json
D catall/am.json
D catall/ar.json
D catall/arc.json
D catall/ast.json
D catall/av.json
D catall/awa.json
D catall/az.json
D catall/azb.json
D catall/ba.json
D catall/bcc.json
D catall/be-tarask.json
D catall/bg.json
D catall/bjn.json
D catall/blk.json
D catall/bn.json
D catall/bo.json
D catall/br.json
D catall/bs.json
D catall/ca.json
D catall/ce.json
D catall/ckb.json
D catall/cs.json
D catall/csb.json
D catall/cy.json
D catall/da.json
D catall/de.json
D catall/diq.json
D catall/dty.json
D catall/el.json
D catall/eml.json
D catall/en.json
D catall/eo.json
D catall/es.json
D catall/et.json
D catall/eu.json
D catall/fa.json
D catall/fi.json
D catall/fo.json
D catall/fr.json
D catall/frp.json
D catall/frr.json
D catall/gl.json
D catall/gsw.json
D catall/ha.json
D catall/hak.json
D catall/haw.json
D catall/he.json
D catall/hi.json
D catall/hr.json
D catall/hu.json
D catall/hy.json
D catall/ia.json
D catall/id.json
D catall/ilo.json
D catall/io.json
D catall/is.json
D catall/it.json
D catall/ja.json
D catall/jv.json
D catall/kab.json
D catall/kk.json
D catall/kn.json
D catall/ko.json
D catall/ksh.json
D catall/ku.json
D catall/ky.json
D catall/lb.json
D catall/li.json
D catall/lt.json
D catall/lv.json
D catall/map-bms.json
D catall/mg.json
D catall/min.json
D catall/mk.json
D catall/ml.json
D catall/mnw.json
D catall/mr.json
D catall/ms.json
D catall/my.json
D catall/nan.json
D catall/nap.json
D catall/nb.json
D catall/nds-nl.json
D catall/ne.json
D catall/new.json
D catall/nl.json
D catall/nn.json
D catall/oc.json
D catall/olo.json
D catall/pdc.json
D catall/pl.json
D catall/pms.json
D catall/ps.json
D catall/pt-br.json
D catall/pt.json
D catall/qqq.json
D catall/ro.json
D catall/ru.json
D catall/sa.json
D catall/scn.json
D catall/sco.json
D catall/sh.json
D catall/si.json
D catall/sk.json
D catall/sl.json
D catall/so.json
D catall/sq.json
D catall/sr.json
D catall/su.json
D catall/sv.json
D catall/sw.json
D catall/szl.json
D catall/ta.json
D catall/te.json
D catall/tg.json
D catall/th.json
D catall/tl.json
D catall/tly.json
D catall/tr.json
D catall/ug.json
D catall/uk.json
D catall/ur.json
D catall/uz.json
D catall/vec.json
D catall/vi.json
D catall/vo.json
D catall/wa.json
D catall/war.json
D catall/xmf.json
D catall/yi.json
D catall/yo.json
D catall/zh.json
D commons/ar.json
D commons/ast.json
D commons/awa.json
D commons/az.json
D commons/azb.json
D commons/ba.json
D commons/bar.json
D commons/bcc.json
D commons/be-tarask.json
D commons/bg.json
D commons/bjn.json
D commons/bn.json
D commons/bo.json
D commons/br.json
D commons/bs.json
D commons/ca.json
D commons/ce.json
D commons/cs.json
D commons/csb.json
D commons/cy.json
D commons/da.json
D commons/de.json
D commons/diq.json
D commons/el.json
D commons/en.json
D commons/eo.json
D commons/es.json
D commons/eu.json
D commons/fa.json
D commons/fi.json
D commons/fo.json
D commons/fr.json
D commons/frp.json
D commons/frr.json
D commons/gl.json
D commons/gsw.json
D commons/hak.json
D commons/haw.json
D commons/he.json
D commons/hi.json
D commons/hsb.json
D commons/hu.json
D commons/hy.json
D commons/ia.json
D commons/id.json
D commons/ie.json
D commons/ilo.json
D commons/is.json
D commons/it.json
D commons/ja.json
D commons/jv.json
D commons/kab.json
D commons/kk.json
D commons/ko.json
D commons/ksh.json
D commons/ky.json
D commons/lb.json
D commons/li.json
D commons/lt.json
D commons/ltg.json
D commons/map-bms.json
D commons/mg.json
D commons/min.json
D commons/mk.json
D commons/ml.json
D commons/mr.json
D commons/ms.json
D commons/mwl.json
D commons/my.json
D commons/nan.json
D commons/nb.json
D commons/nds-nl.json
D commons/ne.json
D commons/nl.json
D commons/nn.json
D commons/oc.json
D commons/or.json
D commons/pdc.json
D commons/pl.json
D commons/pms.json
D commons/pt-br.json
D commons/pt.json
D commons/qqq.json
D commons/ro.json
D commons/ru.json
D commons/scn.json
D commons/sco.json
D commons/sgs.json
D commons/sh.json
D commons/sk.json
D commons/sl.json
D commons/so.json
D commons/sq.json
D commons/sr.json
D commons/sv.json
D commons/szl.json
D commons/te.json
D commons/tg.json
D commons/th.json
D commons/tl.json
D commons/tly.json
D commons/tr.json
D commons/tt.json
D commons/uk.json
D commons/ur.json
D commons/vec.json
D commons/vi.json
D commons/wa.json
D commons/yi.json
D commons/yo.json
D commons/zh.json
D commons_link/ar.json
D commons_link/ast.json
D commons_link/awa.json
D commons_link/ba.json
D commons_link/bcc.json
D commons_link/be-tarask.json
D commons_link/bjn.json
D commons_link/bo.json
D commons_link/br.json
D commons_link/bs.json
D commons_link/ca.json
D commons_link/ce.json
D commons_link/cs.json
D commons_link/csb.json
D commons_link/da.json
D commons_link/de.json
D commons_link/diq.json
D commons_link/el.json
D commons_link/en.json
D commons_link/eo.json
D commons_link/es.json
D commons_link/eu.json
D commons_link/fa.json
D commons_link/fr.json
D commons_link/frr.json
D commons_link/gl.json
D commons_link/gsw.json
D commons_link/haw.json
D commons_link/he.json
D commons_link/hi.json
D commons_link/hu.json
D commons_link/ia.json
D commons_link/id.json
D commons_link/it.json
D commons_link/ja.json
D commons_link/kk.json
D commons_link/ko.json
D commons_link/ksh.json
D commons_link/lb.json
D commons_link/li.json
D commons_link/lt.json
D commons_link/mg.json
D commons_link/mk.json
D commons_link/ml.json
D commons_link/ms.json
D commons_link/my.json
D commons_link/nan.json
D commons_link/nb.json
D commons_link/nds-nl.json
D commons_link/ne.json
D commons_link/nl.json
D commons_link/oc.json
D commons_link/pl.json
D commons_link/pt-br.json
D commons_link/pt.json
D commons_link/qqq.json
D commons_link/ro.json
D commons_link/ru.json
D commons_link/sco.json
D commons_link/sk.json
D commons_link/sl.json
D commons_link/sr.json
D commons_link/su.json
D commons_link/sv.json
D commons_link/ta.json
D commons_link/te.json
D commons_link/th.json
D commons_link/tr.json
D commons_link/uk.json
D commons_link/ur.json
D commons_link/vi.json
D commons_link/yi.json
D commons_link/zh.json
D disambredir/ar.json
D disambredir/ast.json
D disambredir/ba.json
D disambredir/be-tarask.json
D disambredir/bg.json
D disambredir/bn.json
D disambredir/br.json
D disambredir/bs.json
D disambredir/ca.json
D disambredir/cs.json
D disambredir/csb.json
D disambredir/da.json
D disambredir/de.json
D disambredir/diq.json
D disambredir/el.json
D disambredir/en.json
D disambredir/eo.json
D disambredir/es.json
D disambredir/eu.json
D disambredir/fa.json
D disambredir/fi.json
D disambredir/fr.json
D disambredir/frr.json
D disambredir/gl.json
D disambredir/he.json
D disambredir/hi.json
D disambredir/hu.json
D disambredir/ia.json
D disambredir/id.json
D disambredir/it.json
D disambredir/ja.json
D disambredir/ko.json
D disambredir/ksh.json
D disambredir/lb.json
D disambredir/li.json
D disambredir/mg.json
D disambredir/mk.json
D disambredir/ml.json
D disambredir/ms.json
D disambredir/my.json
D disambredir/nan.json
D disambredir/nb.json
D disambredir/nds-nl.json
D disambredir/nl.json
D disambredir/pl.json
D disambredir/pms.json
D disambredir/pt-br.json
D disambredir/pt.json
D disambredir/qqq.json
D disambredir/ro.json
D disambredir/ru.json
D disambredir/sk.json
D disambredir/sr.json
D disambredir/su.json
D disambredir/sv.json
D disambredir/ta.json
D disambredir/th.json
D disambredir/tr.json
D disambredir/uk.json
D disambredir/ur.json
D disambredir/vi.json
D disambredir/zh.json
D editarticle/ar.json
D editarticle/ast.json
D editarticle/awa.json
D editarticle/az.json
D editarticle/azb.json
D editarticle/ba.json
D editarticle/bar.json
D editarticle/bcc.json
D editarticle/be-tarask.json
D editarticle/bg.json
D editarticle/bjn.json
D editarticle/bn.json
D editarticle/br.json
D editarticle/bs.json
D editarticle/ca.json
D editarticle/ce.json
D editarticle/ckb.json
D editarticle/cs.json
D editarticle/csb.json
D editarticle/cy.json
D editarticle/da.json
D editarticle/de.json
D editarticle/diq.json
D editarticle/el.json
D editarticle/en.json
D editarticle/eo.json
D editarticle/es.json
D editarticle/eu.json
D editarticle/fa.json
D editarticle/fi.json
D editarticle/fo.json
D editarticle/fr.json
D editarticle/frp.json
D editarticle/frr.json
D editarticle/gl.json
D editarticle/gsw.json
D editarticle/hak.json
D editarticle/haw.json
D editarticle/he.json
D editarticle/hi.json
D editarticle/hsb.json
D editarticle/hu.json
D editarticle/ia.json
D editarticle/id.json
D editarticle/ie.json
D editarticle/ilo.json
D editarticle/is.json
D editarticle/it.json
D editarticle/ja.json
D editarticle/jv.json
D editarticle/kab.json
D editarticle/kk.json
D editarticle/km.json
D editarticle/kn.json
D editarticle/ko.json
D editarticle/ksh.json
D editarticle/ky.json
D editarticle/lb.json
D editarticle/li.json
D editarticle/lki.json
D editarticle/lt.json
D editarticle/map-bms.json
D editarticle/mg.json
D editarticle/min.json
D editarticle/mk.json
D editarticle/ml.json
D editarticle/ms.json
D editarticle/my.json
D editarticle/nan.json
D editarticle/nb.json
D editarticle/nds-nl.json
D editarticle/ne.json
D editarticle/new.json
D editarticle/nl.json
D editarticle/oc.json
D editarticle/or.json
D editarticle/pam.json
D editarticle/pl.json
D editarticle/pms.json
D editarticle/pt-br.json
D editarticle/pt.json
D editarticle/qqq.json
D editarticle/ro.json
D editarticle/ru.json
D editarticle/sco.json
D editarticle/sk.json
D editarticle/sl.json
D editarticle/so.json
D editarticle/sq.json
D editarticle/sr.json
D editarticle/su.json
D editarticle/sv.json
D editarticle/sw.json
D editarticle/szl.json
D editarticle/te.json
D editarticle/th.json
D editarticle/tl.json
D editarticle/tly.json
D editarticle/tr.json
D editarticle/tt.json
D editarticle/uk.json
D editarticle/ur.json
D editarticle/uz.json
D editarticle/vec.json
D editarticle/vi.json
D editarticle/war.json
D editarticle/xmf.json
D editarticle/yi.json
D editarticle/zh.json
D followlive/af.json
D followlive/ar.json
D followlive/ast.json
D followlive/ba.json
D followlive/be-tarask.json
D followlive/bg.json
D followlive/br.json
D followlive/bs.json
D followlive/ca.json
D followlive/ckb.json
D followlive/cs.json
D followlive/csb.json
D followlive/da.json
D followlive/de.json
D followlive/diq.json
D followlive/el.json
D followlive/en.json
D followlive/eo.json
D followlive/es.json
D followlive/eu.json
D followlive/fa.json
D followlive/fr.json
D followlive/frr.json
D followlive/gl.json
D followlive/he.json
D followlive/hi.json
D followlive/hu.json
D followlive/ia.json
D followlive/id.json
D followlive/it.json
D followlive/ja.json
D followlive/kab.json
D followlive/km.json
D followlive/ko.json
D followlive/ksh.json
D followlive/lb.json
D followlive/li.json
D followlive/mg.json
D followlive/mk.json
D followlive/ml.json
D followlive/mni.json
D followlive/ms.json
D followlive/my.json
D followlive/nan.json
D followlive/nb.json
D followlive/nds-nl.json
D followlive/nl.json
D followlive/pl.json
D followlive/pms.json
D followlive/pt-br.json
D followlive/pt.json
D followlive/qqq.json
D followlive/ro.json
D followlive/ru.json
D followlive/scn.json
D followlive/sk.json
D followlive/sr.json
D followlive/su.json
D followlive/sv.json
D followlive/th.json
D followlive/tr.json
D followlive/uk.json
D followlive/ur.json
D followlive/vi.json
D followlive/zh.json
D isbn/af.json
D isbn/ar.json
D isbn/ast.json
D isbn/az.json
D isbn/azb.json
D isbn/ba.json
D isbn/bar.json
D isbn/bcc.json
D isbn/be-tarask.json
D isbn/be.json
D isbn/bg.json
D isbn/bjn.json
D isbn/bn.json
D isbn/br.json
D isbn/bs.json
D isbn/ca.json
D isbn/ce.json
D isbn/ckb.json
D isbn/cs.json
D isbn/csb.json
D isbn/cv.json
D isbn/cy.json
D isbn/da.json
D isbn/de.json
D isbn/diq.json
D isbn/el.json
D isbn/en.json
D isbn/eo.json
D isbn/es.json
D isbn/et.json
D isbn/eu.json
D isbn/fa.json
D isbn/fi.json
D isbn/fo.json
D isbn/fr.json
D isbn/frp.json
D isbn/frr.json
D isbn/fur.json
D isbn/gl.json
D isbn/gsw.json
D isbn/hak.json
D isbn/haw.json
D isbn/he.json
D isbn/hi.json
D isbn/hr.json
D isbn/hsb.json
D isbn/hu.json
D isbn/hy.json
D isbn/ia.json
D isbn/id.json
D isbn/ie.json
D isbn/ilo.json
D isbn/is.json
D isbn/it.json
D isbn/ja.json
D isbn/jv.json
D isbn/kab.json
D isbn/kk.json
D isbn/ko.json
D isbn/ksh.json
D isbn/ku.json
D isbn/lb.json
D isbn/li.json
D isbn/lt.json
D isbn/lv.json
D isbn/map-bms.json
D isbn/mg.json
D isbn/min.json
D isbn/mk.json
D isbn/ml.json
D isbn/mr.json
D isbn/ms.json
D isbn/mt.json
D isbn/my.json
D isbn/nan.json
D isbn/nb.json
D isbn/nds-nl.json
D isbn/nds.json
D isbn/ne.json
D isbn/nl.json
D isbn/nn.json
D isbn/oc.json
D isbn/os.json
D isbn/pl.json
D isbn/pms.json
D isbn/pt-br.json
D isbn/pt.json
D isbn/qqq.json
D isbn/ro.json
D isbn/ru.json
D isbn/rue.json
D isbn/sco.json
D isbn/si.json
D isbn/sk.json
D isbn/sl.json
D isbn/so.json
D isbn/sq.json
D isbn/sr.json
D isbn/su.json
D isbn/sv.json
D isbn/szl.json
D isbn/ta.json
D isbn/te.json
D isbn/th.json
D isbn/tl.json
D isbn/tly.json
D isbn/tr.json
D isbn/tt.json
D isbn/uk.json
D isbn/ur.json
D isbn/uz.json
D isbn/vec.json
D isbn/vi.json
D isbn/war.json
D isbn/yi.json
D isbn/zh.json
D lonelypages/af.json
D lonelypages/ar.json
D lonelypages/ast.json
D lonelypages/azb.json
D lonelypages/ba.json
D lonelypages/bcc.json
D lonelypages/be-tarask.json
D lonelypages/bg.json
D lonelypages/bn.json
D lonelypages/br.json
D lonelypages/bs.json
D lonelypages/ca.json
D lonelypages/ce.json
D lonelypages/ckb.json
D lonelypages/cs.json
D lonelypages/csb.json
D lonelypages/da.json
D lonelypages/de.json
D lonelypages/diq.json
D lonelypages/el.json
D lonelypages/en.json
D lonelypages/eo.json
D lonelypages/es.json
D lonelypages/eu.json
D lonelypages/fa.json
D lonelypages/fo.json
D lonelypages/fr.json
D lonelypages/frr.json
D lonelypages/gl.json
D lonelypages/gsw.json
D lonelypages/haw.json
D lonelypages/he.json
D lonelypages/hi.json
D lonelypages/hu.json
D lonelypages/ia.json
D lonelypages/id.json
D lonelypages/it.json
D lonelypages/ja.json
D lonelypages/kab.json
D lonelypages/kk.json
D lonelypages/ko.json
D lonelypages/ksh.json
D lonelypages/lb.json
D lonelypages/lt.json
D lonelypages/lv.json
D lonelypages/mg.json
D lonelypages/mk.json
D lonelypages/ml.json
D lonelypages/mni.json
D lonelypages/ms.json
D lonelypages/my.json
D lonelypages/nan.json
D lonelypages/nb.json
D lonelypages/nds-nl.json
D lonelypages/ne.json
D lonelypages/nl.json
D lonelypages/oc.json
D lonelypages/pl.json
D lonelypages/pt-br.json
D lonelypages/pt.json
D lonelypages/qqq.json
D lonelypages/ro.json
D lonelypages/ru.json
D lonelypages/sco.json
D lonelypages/sk.json
D lonelypages/sr.json
D lonelypages/su.json
D lonelypages/sv.json
D lonelypages/te.json
D lonelypages/th.json
D lonelypages/tr.json
D lonelypages/uk.json
D lonelypages/ur.json
D lonelypages/vi.json
D lonelypages/yi.json
D lonelypages/zh.json
D makecat/ar.json
D makecat/ast.json
D makecat/azb.json
D makecat/ba.json
D makecat/bcc.json
D makecat/be-tarask.json
D makecat/bg.json
D makecat/br.json
D makecat/bs.json
D makecat/ca.json
D makecat/ce.json
D makecat/ckb.json
D makecat/cs.json
D makecat/csb.json
D makecat/da.json
D makecat/de.json
D makecat/diq.json
D makecat/el.json
D makecat/en.json
D makecat/eo.json
D makecat/es.json
D makecat/eu.json
D makecat/fa.json
D makecat/fr.json
D makecat/frr.json
D makecat/gl.json
D makecat/gn.json
D makecat/haw.json
D makecat/he.json
D makecat/hi.json
D makecat/hu.json
D makecat/ia.json
D makecat/id.json
D makecat/it.json
D makecat/ja.json
D makecat/kab.json
D makecat/kk.json
D makecat/ko.json
D makecat/ksh.json
D makecat/lb.json
D makecat/lki.json
D makecat/lt.json
D makecat/mg.json
D makecat/mk.json
D makecat/ml.json
D makecat/ms.json
D makecat/my.json
D makecat/nan.json
D makecat/nap.json
D makecat/nb.json
D makecat/nds-nl.json
D makecat/ne.json
D makecat/nl.json
D makecat/nn.json
D makecat/oc.json
D makecat/pl.json
D makecat/pms.json
D makecat/pt-br.json
D makecat/pt.json
D makecat/qqq.json
D makecat/ro.json
D makecat/ru.json
D makecat/sco.json
D makecat/sk.json
D makecat/sr.json
D makecat/sv.json
D makecat/te.json
D makecat/th.json
D makecat/tr.json
D makecat/uk.json
D makecat/ur.json
D makecat/vi.json
D makecat/zh.json
D ndashredir/af.json
D ndashredir/ar.json
D ndashredir/ast.json
D ndashredir/az.json
D ndashredir/azb.json
D ndashredir/ba.json
D ndashredir/bcc.json
D ndashredir/be-tarask.json
D ndashredir/br.json
D ndashredir/bs.json
D ndashredir/ca.json
D ndashredir/ce.json
D ndashredir/cs.json
D ndashredir/csb.json
D ndashredir/cy.json
D ndashredir/da.json
D ndashredir/de.json
D ndashredir/diq.json
D ndashredir/el.json
D ndashredir/en.json
D ndashredir/eo.json
D ndashredir/es.json
D ndashredir/eu.json
D ndashredir/fa.json
D ndashredir/fo.json
D ndashredir/fr.json
D ndashredir/frp.json
D ndashredir/frr.json
D ndashredir/gl.json
D ndashredir/gsw.json
D ndashredir/hak.json
D ndashredir/haw.json
D ndashredir/he.json
D ndashredir/hi.json
D ndashredir/hu.json
D ndashredir/ia.json
D ndashredir/id.json
D ndashredir/is.json
D ndashredir/it.json
D ndashredir/ja.json
D ndashredir/jv.json
D ndashredir/ko.json
D ndashredir/ksh.json
D ndashredir/ky.json
D ndashredir/lb.json
D ndashredir/lt.json
D ndashredir/map-bms.json
D ndashredir/mg.json
D ndashredir/min.json
D ndashredir/mk.json
D ndashredir/ml.json
D ndashredir/ms.json
D ndashredir/nan.json
D ndashredir/nb.json
D ndashredir/nds-nl.json
D ndashredir/ne.json
D ndashredir/new.json
D ndashredir/nl.json
D ndashredir/oc.json
D ndashredir/pl.json
D ndashredir/pms.json
D ndashredir/pt-br.json
D ndashredir/pt.json
D ndashredir/qqq.json
D ndashredir/ro.json
D ndashredir/ru.json
D ndashredir/sco.json
D ndashredir/sk.json
D ndashredir/sl.json
D ndashredir/so.json
D ndashredir/sr.json
D ndashredir/su.json
D ndashredir/sv.json
D ndashredir/th.json
D ndashredir/tl.json
D ndashredir/tly.json
D ndashredir/tr.json
D ndashredir/uk.json
D ndashredir/vec.json
D ndashredir/vi.json
D ndashredir/war.json
D ndashredir/yi.json
D ndashredir/zh.json
D piper/ar.json
D piper/ast.json
D piper/ba.json
D piper/be-tarask.json
D piper/br.json
D piper/bs.json
D piper/ca.json
D piper/cs.json
D piper/csb.json
D piper/da.json
D piper/de.json
D piper/diq.json
D piper/el.json
D piper/en.json
D piper/eo.json
D piper/es.json
D piper/eu.json
D piper/fa.json
D piper/fr.json
D piper/frr.json
D piper/gl.json
D piper/he.json
D piper/hi.json
D piper/hu.json
D piper/ia.json
D piper/id.json
D piper/is.json
D piper/it.json
D piper/ja.json
D piper/ko.json
D piper/ksh.json
D piper/lb.json
D piper/mg.json
D piper/mk.json
D piper/ml.json
D piper/nan.json
D piper/nb.json
D piper/nds-nl.json
D piper/nl.json
D piper/pl.json
D piper/pms.json
D piper/pt-br.json
D piper/pt.json
D piper/qqq.json
D piper/ro.json
D piper/ru.json
D piper/sk.json
D piper/sr.json
D piper/sv.json
D piper/th.json
D piper/tr.json
D piper/uk.json
D piper/vi.json
D piper/zh.json
D remove_edp_images/ar.json
D remove_edp_images/ast.json
D remove_edp_images/ba.json
D remove_edp_images/be-tarask.json
D remove_edp_images/br.json
D remove_edp_images/bs.json
D remove_edp_images/ca.json
D remove_edp_images/cs.json
D remove_edp_images/csb.json
D remove_edp_images/da.json
D remove_edp_images/de.json
D remove_edp_images/diq.json
D remove_edp_images/el.json
D remove_edp_images/en.json
D remove_edp_images/eo.json
D remove_edp_images/es.json
D remove_edp_images/eu.json
D remove_edp_images/fa.json
D remove_edp_images/fr.json
D remove_edp_images/frr.json
D remove_edp_images/gl.json
D remove_edp_images/he.json
D remove_edp_images/hi.json
D remove_edp_images/hu.json
D remove_edp_images/ia.json
D remove_edp_images/id.json
D remove_edp_images/it.json
D remove_edp_images/ja.json
D remove_edp_images/kab.json
D remove_edp_images/ko.json
D remove_edp_images/ksh.json
D remove_edp_images/lb.json
D remove_edp_images/mg.json
D remove_edp_images/mk.json
D remove_edp_images/ml.json
D remove_edp_images/nan.json
D remove_edp_images/nb.json
D remove_edp_images/nds-nl.json
D remove_edp_images/nl.json
D remove_edp_images/pl.json
D remove_edp_images/pms.json
D remove_edp_images/pt-br.json
D remove_edp_images/pt.json
D remove_edp_images/qqq.json
D remove_edp_images/ro.json
D remove_edp_images/ru.json
D remove_edp_images/sk.json
D remove_edp_images/sr.json
D remove_edp_images/sv.json
D remove_edp_images/th.json
D remove_edp_images/tr.json
D remove_edp_images/uk.json
D remove_edp_images/vi.json
D remove_edp_images/zh.json
D selflink/ar.json
D selflink/ast.json
D selflink/ba.json
D selflink/bcc.json
D selflink/be-tarask.json
D selflink/bg.json
D selflink/bn.json
D selflink/br.json
D selflink/bs.json
D selflink/ca.json
D selflink/ce.json
D selflink/ckb.json
D selflink/cs.json
D selflink/csb.json
D selflink/da.json
D selflink/de.json
D selflink/diq.json
D selflink/el.json
D selflink/en.json
D selflink/eo.json
D selflink/es.json
D selflink/eu.json
D selflink/fa.json
D selflink/fr.json
D selflink/frr.json
D selflink/gl.json
D selflink/he.json
D selflink/hi.json
D selflink/hu.json
D selflink/ia.json
D selflink/id.json
D selflink/it.json
D selflink/ja.json
D selflink/kk.json
D selflink/ko.json
D selflink/ksh.json
D selflink/lb.json
D selflink/li.json
D selflink/lt.json
D selflink/mg.json
D selflink/mk.json
D selflink/ml.json
D selflink/ms.json
D selflink/nan.json
D selflink/nb.json
D selflink/nds-nl.json
D selflink/ne.json
D selflink/nl.json
D selflink/nn.json
D selflink/pl.json
D selflink/pms.json
D selflink/pt-br.json
D selflink/pt.json
D selflink/qqq.json
D selflink/ro.json
D selflink/ru.json
D selflink/sk.json
D selflink/sr.json
D selflink/su.json
D selflink/sv.json
D selflink/ta.json
D selflink/th.json
D selflink/tr.json
D selflink/uk.json
D selflink/ur.json
D selflink/vi.json
D selflink/zh.json
D spamremove/ar.json
D spamremove/ast.json
D spamremove/ba.json
D spamremove/bcc.json
D spamremove/be-tarask.json
D spamremove/be.json
D spamremove/br.json
D spamremove/bs.json
D spamremove/ca.json
D spamremove/ce.json
D spamremove/cs.json
D spamremove/csb.json
D spamremove/da.json
D spamremove/de.json
D spamremove/diq.json
D spamremove/el.json
D spamremove/en.json
D spamremove/eo.json
D spamremove/es.json
D spamremove/eu.json
D spamremove/fa.json
D spamremove/fr.json
D spamremove/frr.json
D spamremove/gl.json
D spamremove/gn.json
D spamremove/haw.json
D spamremove/he.json
D spamremove/hi.json
D spamremove/hu.json
D spamremove/ia.json
D spamremove/id.json
D spamremove/it.json
D spamremove/ja.json
D spamremove/kk.json
D spamremove/ko.json
D spamremove/ksh.json
D spamremove/lb.json
D spamremove/li.json
D spamremove/lt.json
D spamremove/mg.json
D spamremove/mk.json
D spamremove/ml.json
D spamremove/ms.json
D spamremove/nan.json
D spamremove/nb.json
D spamremove/nds-nl.json
D spamremove/ne.json
D spamremove/nl.json
D spamremove/oc.json
D spamremove/pl.json
D spamremove/pms.json
D spamremove/pt-br.json
D spamremove/pt.json
D spamremove/qqq.json
D spamremove/ro.json
D spamremove/ru.json
D spamremove/sco.json
D spamremove/sk.json
D spamremove/sr.json
D spamremove/su.json
D spamremove/sv.json
D spamremove/ta.json
D spamremove/th.json
D spamremove/tr.json
D spamremove/uk.json
D spamremove/vi.json
D spamremove/zh.json
D spellcheck/ar.json
D spellcheck/ast.json
D spellcheck/az.json
D spellcheck/azb.json
D spellcheck/ba.json
D spellcheck/bcc.json
D spellcheck/be-tarask.json
D spellcheck/br.json
D spellcheck/bs.json
D spellcheck/ca.json
D spellcheck/ce.json
D spellcheck/cs.json
D spellcheck/csb.json
D spellcheck/da.json
D spellcheck/de.json
D spellcheck/diq.json
D spellcheck/el.json
D spellcheck/en.json
D spellcheck/eo.json
D spellcheck/es.json
D spellcheck/eu.json
D spellcheck/fa.json
D spellcheck/fi.json
D spellcheck/fr.json
D spellcheck/frr.json
D spellcheck/gl.json
D spellcheck/gn.json
D spellcheck/gsw.json
D spellcheck/hak.json
D spellcheck/haw.json
D spellcheck/he.json
D spellcheck/hi.json
D spellcheck/hu.json
D spellcheck/ia.json
D spellcheck/id.json
D spellcheck/it.json
D spellcheck/ja.json
D spellcheck/ko.json
D spellcheck/ksh.json
D spellcheck/lb.json
D spellcheck/lt.json
D spellcheck/map-bms.json
D spellcheck/mg.json
D spellcheck/mk.json
D spellcheck/ml.json
D spellcheck/ms.json
D spellcheck/nan.json
D spellcheck/nb.json
D spellcheck/nds-nl.json
D spellcheck/ne.json
D spellcheck/new.json
D spellcheck/nl.json
D spellcheck/oc.json
D spellcheck/pl.json
D spellcheck/pms.json
D spellcheck/pt-br.json
D spellcheck/pt.json
D spellcheck/qqq.json
D spellcheck/ro.json
D spellcheck/ru.json
D spellcheck/sco.json
D spellcheck/sh.json
D spellcheck/sk.json
D spellcheck/sl.json
D spellcheck/so.json
D spellcheck/sr.json
D spellcheck/sv.json
D spellcheck/th.json
D spellcheck/tly.json
D spellcheck/tr.json
D spellcheck/uk.json
D spellcheck/ur.json
D spellcheck/vi.json
D spellcheck/zh.json
D standardize_interwiki/ar.json
D standardize_interwiki/ast.json
D standardize_interwiki/ba.json
D standardize_interwiki/be-tarask.json
D standardize_interwiki/br.json
D standardize_interwiki/ca.json
D standardize_interwiki/ckb.json
D standardize_interwiki/cs.json
D standardize_interwiki/da.json
D standardize_interwiki/de.json
D standardize_interwiki/diq.json
D standardize_interwiki/el.json
D standardize_interwiki/en.json
D standardize_interwiki/eo.json
D standardize_interwiki/es.json
D standardize_interwiki/fa.json
D standardize_interwiki/fr.json
D standardize_interwiki/frr.json
D standardize_interwiki/gl.json
D standardize_interwiki/he.json
D standardize_interwiki/hi.json
D standardize_interwiki/ia.json
D standardize_interwiki/id.json
D standardize_interwiki/it.json
D standardize_interwiki/ja.json
D standardize_interwiki/ko.json
D standardize_interwiki/ksh.json
D standardize_interwiki/mk.json
D standardize_interwiki/ml.json
D standardize_interwiki/nb.json
D standardize_interwiki/nds-nl.json
D standardize_interwiki/nds.json
D standardize_interwiki/nl.json
D standardize_interwiki/no.json
D standardize_interwiki/pms.json
D standardize_interwiki/pt-br.json
D standardize_interwiki/pt.json
D standardize_interwiki/qqq.json
D standardize_interwiki/ro.json
D standardize_interwiki/ru.json
D standardize_interwiki/sk.json
D standardize_interwiki/sr.json
D standardize_interwiki/sv.json
D standardize_interwiki/tr.json
D standardize_interwiki/uk.json
D standardize_interwiki/ur.json
D standardize_interwiki/vi.json
D standardize_interwiki/zh.json
D states_redirect/ar.json
D states_redirect/ast.json
D states_redirect/ba.json
D states_redirect/be-tarask.json
D states_redirect/br.json
D states_redirect/bs.json
D states_redirect/ca.json
D states_redirect/cs.json
D states_redirect/csb.json
D states_redirect/da.json
D states_redirect/de.json
D states_redirect/diq.json
D states_redirect/el.json
D states_redirect/en.json
D states_redirect/eo.json
D states_redirect/es.json
D states_redirect/eu.json
D states_redirect/fa.json
D states_redirect/fr.json
D states_redirect/frr.json
D states_redirect/gl.json
D states_redirect/he.json
D states_redirect/hi.json
D states_redirect/hu.json
D states_redirect/ia.json
D states_redirect/id.json
D states_redirect/it.json
D states_redirect/ja.json
D states_redirect/ko.json
D states_redirect/ksh.json
D states_redirect/lb.json
D states_redirect/mg.json
D states_redirect/mk.json
D states_redirect/ml.json
D states_redirect/nan.json
D states_redirect/nb.json
D states_redirect/nds-nl.json
D states_redirect/nl.json
D states_redirect/pl.json
D states_redirect/pms.json
D states_redirect/pt-br.json
D states_redirect/pt.json
D states_redirect/qqq.json
D states_redirect/ro.json
D states_redirect/ru.json
D states_redirect/sk.json
D states_redirect/sr.json
D states_redirect/sv.json
D states_redirect/th.json
D states_redirect/tr.json
D states_redirect/uk.json
D states_redirect/vi.json
D states_redirect/zh.json
D table2wiki/ar.json
D table2wiki/ast.json
D table2wiki/az.json
D table2wiki/azb.json
D table2wiki/ba.json
D table2wiki/bcc.json
D table2wiki/be-tarask.json
D table2wiki/bn.json
D table2wiki/br.json
D table2wiki/bs.json
D table2wiki/ca.json
D table2wiki/ce.json
D table2wiki/ckb.json
D table2wiki/cs.json
D table2wiki/csb.json
D table2wiki/da.json
D table2wiki/de.json
D table2wiki/diq.json
D table2wiki/el.json
D table2wiki/en.json
D table2wiki/eo.json
D table2wiki/es.json
D table2wiki/eu.json
D table2wiki/fa.json
D table2wiki/fi.json
D table2wiki/fr.json
D table2wiki/frp.json
D table2wiki/frr.json
D table2wiki/gl.json
D table2wiki/gsw.json
D table2wiki/hak.json
D table2wiki/haw.json
D table2wiki/he.json
D table2wiki/hi.json
D table2wiki/hu.json
D table2wiki/ia.json
D table2wiki/id.json
D table2wiki/ilo.json
D table2wiki/is.json
D table2wiki/it.json
D table2wiki/ja.json
D table2wiki/jv.json
D table2wiki/kk.json
D table2wiki/ko.json
D table2wiki/ksh.json
D table2wiki/lb.json
D table2wiki/lt.json
D table2wiki/map-bms.json
D table2wiki/mg.json
D table2wiki/mk.json
D table2wiki/ml.json
D table2wiki/ms.json
D table2wiki/nan.json
D table2wiki/nb.json
D table2wiki/nds-nl.json
D table2wiki/ne.json
D table2wiki/nl.json
D table2wiki/oc.json
D table2wiki/pl.json
D table2wiki/pms.json
D table2wiki/pt-br.json
D table2wiki/pt.json
D table2wiki/qqq.json
D table2wiki/ro.json
D table2wiki/ru.json
D table2wiki/sco.json
D table2wiki/sh.json
D table2wiki/sk.json
D table2wiki/sl.json
D table2wiki/so.json
D table2wiki/sr.json
D table2wiki/su.json
D table2wiki/sv.json
D table2wiki/th.json
D table2wiki/tl.json
D table2wiki/tly.json
D table2wiki/tr.json
D table2wiki/tt.json
D table2wiki/uk.json
D table2wiki/vec.json
D table2wiki/vi.json
D table2wiki/zh.json
D thirdparty/aeb.json
D thirdparty/af.json
D thirdparty/am.json
D thirdparty/ar.json
D thirdparty/arc.json
D thirdparty/ast.json
D thirdparty/az.json
D thirdparty/azb.json
D thirdparty/ba.json
D thirdparty/bbc-latn.json
D thirdparty/bbc.json
D thirdparty/bcc.json
D thirdparty/be-tarask.json
D thirdparty/be.json
D thirdparty/bg.json
D thirdparty/bgn.json
D thirdparty/bjn.json
D thirdparty/blk.json
D thirdparty/bn.json
D thirdparty/br.json
D thirdparty/bs.json
D thirdparty/ca.json
D thirdparty/ce.json
D thirdparty/ckb.json
D thirdparty/co.json
D thirdparty/cs.json
D thirdparty/csb.json
D thirdparty/cu.json
D thirdparty/cv.json
D thirdparty/cy.json
D thirdparty/da.json
D thirdparty/de.json
D thirdparty/diq.json
D thirdparty/dty.json
D thirdparty/el.json
D thirdparty/eml.json
D thirdparty/en.json
D thirdparty/eo.json
D thirdparty/es.json
D thirdparty/et.json
D thirdparty/eu.json
D thirdparty/fa.json
D thirdparty/fi.json
D thirdparty/fo.json
D thirdparty/fr.json
D thirdparty/frp.json
D thirdparty/frr.json
D thirdparty/fy.json
D thirdparty/gl.json
D thirdparty/gsw.json
D thirdparty/gu.json
D thirdparty/hak.json
D thirdparty/haw.json
D thirdparty/he.json
D thirdparty/hi.json
D thirdparty/hr.json
D thirdparty/hsb.json
D thirdparty/hu.json
D thirdparty/hy.json
D thirdparty/ia.json
D thirdparty/id.json
D thirdparty/ilo.json
D thirdparty/io.json
D thirdparty/is.json
D thirdparty/it.json
D thirdparty/ja.json
D thirdparty/jv.json
D thirdparty/ka.json
D thirdparty/kab.json
D thirdparty/kjp.json
D thirdparty/kk.json
D thirdparty/km.json
D thirdparty/kn.json
D thirdparty/ko.json
D thirdparty/ksh.json
D thirdparty/ku.json
D thirdparty/ky.json
D thirdparty/lb.json
D thirdparty/lez.json
D thirdparty/lrc.json
D thirdparty/lt.json
D thirdparty/lus.json
D thirdparty/lv.json
D thirdparty/lzz.json
D thirdparty/map-bms.json
D thirdparty/mg.json
D thirdparty/mk.json
D thirdparty/ml.json
D thirdparty/mni.json
D thirdparty/mnw.json
D thirdparty/mr.json
D thirdparty/ms.json
D thirdparty/mt.json
D thirdparty/my.json
D thirdparty/nan.json
D thirdparty/nb.json
D thirdparty/nds-nl.json
D thirdparty/ne.json
D thirdparty/new.json
D thirdparty/nl.json
D thirdparty/oc.json
D thirdparty/olo.json
D thirdparty/or.json
D thirdparty/pa.json
D thirdparty/pdc.json
D thirdparty/pl.json
D thirdparty/pms.json
D thirdparty/ps.json
D thirdparty/pt-br.json
D thirdparty/pt.json
D thirdparty/qqq.json
D thirdparty/ro.json
D thirdparty/ru.json
D thirdparty/sa.json
D thirdparty/scn.json
D thirdparty/sco.json
D thirdparty/sd.json
D thirdparty/sh.json
D thirdparty/shy-latn.json
D thirdparty/si.json
D thirdparty/sk.json
D thirdparty/skr-arab.json
D thirdparty/sl.json
D thirdparty/sms.json
D thirdparty/so.json
D thirdparty/sq.json
D thirdparty/sr.json
D thirdparty/su.json
D thirdparty/sv.json
D thirdparty/sw.json
D thirdparty/ta.json
D thirdparty/te.json
D thirdparty/tg.json
D thirdparty/th.json
D thirdparty/tl.json
D thirdparty/tly.json
D thirdparty/tr.json
D thirdparty/tt.json
D thirdparty/tzm.json
D thirdparty/uk.json
D thirdparty/ur.json
D thirdparty/uz.json
D thirdparty/vec.json
D thirdparty/vi.json
D thirdparty/vo.json
D thirdparty/wa.json
D thirdparty/yi.json
D thirdparty/yo.json
D thirdparty/zh.json
D unlink/af.json
D unlink/ar.json
D unlink/ast.json
D unlink/az.json
D unlink/azb.json
D unlink/ba.json
D unlink/bcc.json
D unlink/be-tarask.json
D unlink/br.json
D unlink/bs.json
D unlink/ca.json
D unlink/ce.json
D unlink/ckb.json
D unlink/cs.json
D unlink/csb.json
D unlink/cy.json
D unlink/da.json
D unlink/de.json
D unlink/diq.json
D unlink/el.json
D unlink/en.json
D unlink/eo.json
D unlink/es.json
D unlink/et.json
D unlink/eu.json
D unlink/fa.json
D unlink/fi.json
D unlink/fo.json
D unlink/fr.json
D unlink/frp.json
D unlink/frr.json
D unlink/gl.json
D unlink/haw.json
D unlink/he.json
D unlink/hi.json
D unlink/hu.json
D unlink/ia.json
D unlink/id.json
D unlink/ilo.json
D unlink/is.json
D unlink/it.json
D unlink/ja.json
D unlink/jv.json
D unlink/kab.json
D unlink/kk.json
D unlink/ko.json
D unlink/ksh.json
D unlink/ky.json
D unlink/lb.json
D unlink/li.json
D unlink/lt.json
D unlink/lus.json
D unlink/map-bms.json
D unlink/mg.json
D unlink/mk.json
D unlink/ml.json
D unlink/ms.json
D unlink/nan.json
D unlink/nb.json
D unlink/nds-nl.json
D unlink/ne.json
D unlink/nl.json
D unlink/oc.json
D unlink/pl.json
D unlink/pms.json
D unlink/pt-br.json
D unlink/pt.json
D unlink/qqq.json
D unlink/ro.json
D unlink/ru.json
D unlink/sco.json
D unlink/sh.json
D unlink/si.json
D unlink/sk.json
D unlink/sl.json
D unlink/so.json
D unlink/sq.json
D unlink/sr.json
D unlink/su.json
D unlink/sv.json
D unlink/sw.json
D unlink/ta.json
D unlink/th.json
D unlink/tl.json
D unlink/tly.json
D unlink/tr.json
D unlink/tt.json
D unlink/uk.json
D unlink/ur.json
D unlink/uz.json
D unlink/vec.json
D unlink/vi.json
D unlink/zh.json
1,870 files changed, 0 insertions(+), 17,723 deletions(-)
Approvals:
Xqt: Looks good to me, approved
jenkins-bot: Verified
--
To view, visit https://gerrit.wikimedia.org/r/c/pywikibot/i18n/+/778222
To unsubscribe, or for help writing mail filters, visit https://gerrit.wikimedia.org/r/settings
Gerrit-Project: pywikibot/i18n
Gerrit-Branch: master
Gerrit-Change-Id: I4e13cc8494d1e15dd804b7f6822c2e7b61d00eec
Gerrit-Change-Number: 778222
Gerrit-PatchSet: 1
Gerrit-Owner: Xqt <info(a)gno.de>
Gerrit-Reviewer: Xqt <info(a)gno.de>
Gerrit-Reviewer: jenkins-bot
Gerrit-MessageType: merged