Xqt has submitted this change and it was merged. ( https://gerrit.wikimedia.org/r/403404 )
Change subject: [doc] Show currently supported python versions
......................................................................
[doc] Show currently supported python versions
Change-Id: Ic93bb50a4d48a6831c31ef9a3d1823855cc1284f
---
M pwb.py
M setup.py
2 files changed, 10 insertions(+), 10 deletions(-)
Approvals:
Dvorapa: Looks good to me, but someone else must approve
Dalba: Looks good to me, approved
jenkins-bot: Verified
diff --git a/pwb.py b/pwb.py
index f4ff16d..2340367 100755
--- a/pwb.py
+++ b/pwb.py
@@ -9,7 +9,7 @@
and it will use the package directory to store all user files, will fix up
search paths so the package does not need to be installed, etc.
"""
-# (C) Pywikibot team, 2015-2016
+# (C) Pywikibot team, 2015-2018
#
# Distributed under the terms of the MIT license.
#
@@ -33,10 +33,10 @@
PY26 = (PYTHON_VERSION < (2, 7))
versions_required_message = """
-Pywikibot not available on:
-%s
+Pywikibot is not available on:
+{version}
-Pywikibot is only supported under Python 2.6.5+, 2.7.2+ or 3.3+
+This version of Pywikibot only supports Python 2.6.5+, 2.7.2+ or 3.3+.
"""
@@ -49,7 +49,7 @@
if not python_is_supported():
- print(versions_required_message % sys.version)
+ print(versions_required_message.format(version=sys.version))
sys.exit(1)
pwb = None
diff --git a/setup.py b/setup.py
index 495d3f2..284bb80 100644
--- a/setup.py
+++ b/setup.py
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
"""Installer script for Pywikibot 3.0 framework."""
#
-# (C) Pywikibot team, 2009-2017
+# (C) Pywikibot team, 2009-2018
#
# Distributed under the terms of the MIT license.
#
@@ -26,10 +26,10 @@
PY26 = (PYTHON_VERSION < (2, 7))
versions_required_message = """
-Pywikibot not available on:
-%s
+Pywikibot is not available on:
+{version}
-Pywikibot is only supported under Python 2.6.5+, 2.7.2+ or 3.3+
+This version of Pywikibot only supports Python 2.6.5+, 2.7.2+ or 3.3+.
"""
@@ -42,7 +42,7 @@
if not python_is_supported():
- raise RuntimeError(versions_required_message % sys.version)
+ raise RuntimeError(versions_required_message.format(version=sys.version))
test_deps = ['bz2file', 'mock']
--
To view, visit https://gerrit.wikimedia.org/r/403404
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings
Gerrit-MessageType: merged
Gerrit-Change-Id: Ic93bb50a4d48a6831c31ef9a3d1823855cc1284f
Gerrit-PatchSet: 8
Gerrit-Project: pywikibot/core
Gerrit-Branch: master
Gerrit-Owner: Xqt <info(a)gno.de>
Gerrit-Reviewer: Dalba <dalba.wiki(a)gmail.com>
Gerrit-Reviewer: Dvorapa <dvorapa(a)seznam.cz>
Gerrit-Reviewer: John Vandenberg <jayvdb(a)gmail.com>
Gerrit-Reviewer: Lokal Profil <lokal.profil(a)gmail.com>
Gerrit-Reviewer: Xqt <info(a)gno.de>
Gerrit-Reviewer: Zoranzoki21 <zorandori4444(a)gmail.com>
Gerrit-Reviewer: jenkins-bot <>
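The change above replaces `%`-interpolation with `str.format` and a named placeholder. A minimal sketch of the difference (the version string is a made-up example, not taken from the change): both forms interpolate `sys.version`-like text the same way, but the named `{version}` placeholder stays robust if the template ever gains a literal `%` character, which would break `%`-formatting.

```python
# Old style, as removed by the change: positional %-interpolation.
template_old = "Pywikibot not available on:\n%s"
# New style, as introduced by the change: a named str.format placeholder.
template_new = "Pywikibot is not available on:\n{version}"

version = "3.7.0 (default)"  # hypothetical stand-in for sys.version

assert template_old % version == "Pywikibot not available on:\n3.7.0 (default)"
assert (template_new.format(version=version)
        == "Pywikibot is not available on:\n3.7.0 (default)")

# A named placeholder also survives literal '%' characters in the template,
# which would raise or mis-format under %-interpolation:
pct = "Coverage: 100%\n{version}"
assert pct.format(version=version) == "Coverage: 100%\n3.7.0 (default)"
```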
jenkins-bot has submitted this change and it was merged. ( https://gerrit.wikimedia.org/r/407590 )
Change subject: diff_checker.py: Decode tokenizer strings using 'utf-8' encoding on Python 2
......................................................................
diff_checker.py: Decode tokenizer strings using 'utf-8' encoding on Python 2
Apparently the tokenizer on Python 3 has an internal mechanism to detect the
right encoding and returns unicode objects.[1] The tokenizer on Python 2,
however, returns byte strings, which need to be decoded explicitly; otherwise
the default encoding (sometimes 'ascii') is used, which causes a
UnicodeDecodeError.
[1] See:
https://docs.python.org/3/library/tokenize.html#tokenize.detect_encoding
Bug: T186301
Change-Id: I029ae20145bb634c72e2f7f24b8c749d5885fb25
---
M scripts/maintenance/diff_checker.py
1 file changed, 4 insertions(+), 0 deletions(-)
Approvals:
jenkins-bot: Verified
Xqt: Looks good to me, approved
diff --git a/scripts/maintenance/diff_checker.py b/scripts/maintenance/diff_checker.py
index 734befd..a055143 100644
--- a/scripts/maintenance/diff_checker.py
+++ b/scripts/maintenance/diff_checker.py
@@ -30,8 +30,10 @@
from subprocess import check_output
from sys import version_info
if version_info.major == 3:
+ PY2 = False
from tokenize import tokenize, STRING
else:
+ PY2 = True
from tokenize import generate_tokens as tokenize, STRING
from unidiff import PatchSet
@@ -72,6 +74,8 @@
break
if start[0] not in line_nos or type_ != STRING:
continue
+ if PY2:
+ string = string.decode('utf-8')
match = STRING_MATCH(string)
if match.group('unicode_literal'):
error = True
--
To view, visit https://gerrit.wikimedia.org/r/407590
Gerrit-MessageType: merged
Gerrit-Change-Id: I029ae20145bb634c72e2f7f24b8c749d5885fb25
Gerrit-PatchSet: 4
Gerrit-Project: pywikibot/core
Gerrit-Branch: master
Gerrit-Owner: Dalba <dalba.wiki(a)gmail.com>
Gerrit-Reviewer: Dalba <dalba.wiki(a)gmail.com>
Gerrit-Reviewer: Xqt <info(a)gno.de>
Gerrit-Reviewer: jenkins-bot <>
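The Python 3 behaviour the commit message cites can be sketched as follows (a minimal illustration, not code from the change itself): `tokenize.tokenize` reads a bytes source, detects the declared encoding on its own, and yields STRING tokens as already-decoded `str` objects, so no explicit `.decode('utf-8')` is needed there, only on Python 2's `generate_tokens` output.

```python
import io
import tokenize

# Bytes source with a PEP 263 coding cookie and a non-ASCII string literal.
src = b'# -*- coding: utf-8 -*-\ns = "caf\xc3\xa9"\n'

# tokenize.tokenize() takes a readline callable over bytes, detects the
# declared encoding itself, and emits STRING tokens as decoded str objects.
tokens = list(tokenize.tokenize(io.BytesIO(src).readline))
strings = [tok.string for tok in tokens if tok.type == tokenize.STRING]

assert strings == ['"caf\u00e9"']
assert all(isinstance(s, str) for s in strings)
```

On Python 2, `tokenize.generate_tokens` performs no such detection and hands back raw byte strings, which is why the patch adds the guarded `string.decode('utf-8')` step.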
jenkins-bot has submitted this change and it was merged. ( https://gerrit.wikimedia.org/r/406405 )
Change subject: [PEP8] Keep lines below 80 chars
......................................................................
[PEP8] Keep lines below 80 chars
Change-Id: I1c2755c7c41ddfb9300cfcebf9c8ad58133647af
---
M pywikibot/__init__.py
M pywikibot/config2.py
M pywikibot/cosmetic_changes.py
M pywikibot/exceptions.py
M pywikibot/family.py
M pywikibot/fixes.py
M pywikibot/textlib.py
M scripts/archivebot.py
M scripts/basic.py
M scripts/category.py
10 files changed, 304 insertions(+), 246 deletions(-)
Approvals:
Dalba: Looks good to me, approved
jenkins-bot: Verified
diff --git a/pywikibot/__init__.py b/pywikibot/__init__.py
index 508f8ba..342e372 100644
--- a/pywikibot/__init__.py
+++ b/pywikibot/__init__.py
@@ -372,8 +372,8 @@
u"""
Return the precision of the geo coordinate.
- The precision is calculated if the Coordinate does not have a precision,
- and self._dim is set.
+ The precision is calculated if the Coordinate does not have a
+ precision, and self._dim is set.
When no precision and no self._dim exists, None is returned.
@@ -385,13 +385,15 @@
In small angle approximation (and thus in radians):
- M{Δλ ≈ Δpos / r_φ}, where r_φ is the radius of earth at the given latitude.
+ M{Δλ ≈ Δpos / r_φ}, where r_φ is the radius of earth at the given
+ latitude.
Δλ is the error in longitude.
M{r_φ = r cos φ}, where r is the radius of earth, φ the latitude
Therefore::
- precision = math.degrees(self._dim/(radius*math.cos(math.radians(self.lat))))
+ precision = math.degrees(
+ self._dim/(radius*math.cos(math.radians(self.lat))))
@rtype: float or None
"""
@@ -408,19 +410,24 @@
self._precision = value
def precisionToDim(self):
- """Convert precision from Wikibase to GeoData's dim and return the latter.
+ """
+ Convert precision from Wikibase to GeoData's dim and return the latter.
- dim is calculated if the Coordinate doesn't have a dimension, and precision is set.
- When neither dim nor precision are set, ValueError is thrown.
+ dim is calculated if the Coordinate doesn't have a dimension, and
+ precision is set. When neither dim nor precision are set, ValueError
+ is thrown.
Carrying on from the earlier derivation of precision, since
- precision = math.degrees(dim/(radius*math.cos(math.radians(self.lat)))), we get
- dim = math.radians(precision)*radius*math.cos(math.radians(self.lat))
- But this is not valid, since it returns a float value for dim which is an integer.
- We must round it off to the nearest integer.
+ precision = math.degrees(dim/(radius*math.cos(math.radians(self.lat))))
+ we get:
+ dim = math.radians(
+ precision)*radius*math.cos(math.radians(self.lat))
+ But this is not valid, since it returns a float value for dim which is
+ an integer. We must round it off to the nearest integer.
Therefore::
- dim = int(round(math.radians(precision)*radius*math.cos(math.radians(self.lat))))
+ dim = int(round(math.radians(
+ precision)*radius*math.cos(math.radians(self.lat))))
@rtype: int or None
"""
@@ -430,7 +437,8 @@
radius = 6378137
self._dim = int(
round(
- math.radians(self._precision) * radius * math.cos(math.radians(self.lat))
+ math.radians(self._precision) * radius * math.cos(
+ math.radians(self.lat))
)
)
return self._dim
@@ -496,11 +504,15 @@
readable string, e.g., 'hour'. If no precision is given, it is set
according to the given time units.
- Timezone information is given in three different ways depending on the time:
- * Times after the implementation of UTC (1972): as an offset from UTC in minutes;
- * Times before the implementation of UTC: the offset of the time zone from universal time;
- * Before the implementation of time zones: The longitude of the place of
- the event, in the range −180° to 180°, multiplied by 4 to convert to minutes.
+ Timezone information is given in three different ways depending on the
+ time:
+ * Times after the implementation of UTC (1972): as an offset from UTC
+ in minutes;
+ * Times before the implementation of UTC: the offset of the time zone
+ from universal time;
+ * Before the implementation of time zones: The longitude of the place
+ of the event, in the range −180° to 180°, multiplied by 4 to convert
+ to minutes.
@param year: The year as a signed integer of between 1 and 16 digits.
@type year: long
@@ -516,11 +528,11 @@
@type second: int
@param precision: The unit of the precision of the time.
@type precision: int or str
- @param before: Number of units after the given time it could be, if uncertain.
- The unit is given by the precision.
+ @param before: Number of units after the given time it could be, if
+ uncertain. The unit is given by the precision.
@type before: int
- @param after: Number of units before the given time it could be, if uncertain.
- The unit is given by the precision.
+ @param after: Number of units before the given time it could be, if
+ uncertain. The unit is given by the precision.
@type after: int
@param timezone: Timezone information in minutes.
@type timezone: int
@@ -583,18 +595,19 @@
The timestamp differs from ISO 8601 in that:
* The year is always signed and having between 1 and 16 digits;
* The month, day and time are zero if they are unknown;
- * The Z is discarded since time zone is determined from the timezone param.
+ * The Z is discarded since time zone is determined from the timezone
+ param.
@param datetimestr: Timestamp in a format resembling ISO 8601,
e.g. +2013-01-01T00:00:00Z
@type datetimestr: str
@param precision: The unit of the precision of the time.
@type precision: int or str
- @param before: Number of units after the given time it could be, if uncertain.
- The unit is given by the precision.
+ @param before: Number of units after the given time it could be, if
+ uncertain. The unit is given by the precision.
@type before: int
- @param after: Number of units before the given time it could be, if uncertain.
- The unit is given by the precision.
+ @param after: Number of units before the given time it could be, if
+ uncertain. The unit is given by the precision.
@type after: int
@param timezone: Timezone information in minutes.
@type timezone: int
@@ -623,11 +636,11 @@
@type timestamp: pywikibot.Timestamp
@param precision: The unit of the precision of the time.
@type precision: int or str
- @param before: Number of units after the given time it could be, if uncertain.
- The unit is given by the precision.
+ @param before: Number of units after the given time it could be, if
+ uncertain. The unit is given by the precision.
@type before: int
- @param after: Number of units before the given time it could be, if uncertain.
- The unit is given by the precision.
+ @param after: Number of units before the given time it could be, if
+ uncertain. The unit is given by the precision.
@type after: int
@param timezone: Timezone information in minutes.
@type timezone: int
@@ -668,7 +681,8 @@
@return: Timestamp
@rtype: pywikibot.Timestamp
- @raises ValueError: instance value can not be represented using Timestamp
+ @raises ValueError: instance value can not be represented using
+ Timestamp
"""
if self.year <= 0:
raise ValueError('You cannot turn BC dates into a Timestamp')
@@ -716,7 +730,7 @@
@staticmethod
def _require_errors(site):
"""
- Check if the Wikibase site is so old it requires error bounds to be given.
+ Check if Wikibase site is so old it requires error bounds to be given.
If no site item is supplied it raises a warning and returns True.
@@ -729,7 +743,8 @@
"WbQuantity now expects a 'site' parameter. This is needed to "
"ensure correct handling of error bounds.")
return False
- return MediaWikiVersion(site.version()) < MediaWikiVersion('1.29.0-wmf.2')
+ return MediaWikiVersion(
+ site.version()) < MediaWikiVersion('1.29.0-wmf.2')
@staticmethod
def _todecimal(value):
@@ -768,14 +783,14 @@
Create a new WbQuantity object.
@param amount: number representing this quantity
- @type amount: string or Decimal. Other types are accepted, and converted
- via str to Decimal.
+ @type amount: string or Decimal. Other types are accepted, and
+ converted via str to Decimal.
@param unit: the Wikibase item for the unit or the entity URI of this
- Wikibase item.
+ Wikibase item.
@type unit: pywikibot.ItemPage, str or None
@param error: the uncertainty of the amount (e.g. ±1)
- @type error: same as amount, or tuple of two values, where the first value is
- the upper error and the second is the lower error value.
+ @type error: same as amount, or tuple of two values, where the first
+ value is the upper error and the second is the lower error value.
@param site: The Wikibase site
@type site: pywikibot.site.DataSite
"""
@@ -909,7 +924,7 @@
@classmethod
def fromWikibase(cls, wb):
"""
- Create a WbMonolingualText from the JSON data given by the Wikibase API.
+ Create a WbMonolingualText from the JSON data given by Wikibase API.
@param wb: Wikibase JSON
@type wb: dict
@@ -1320,8 +1335,8 @@
"""
Drop this process from the throttle log, after pending threads finish.
- Wait for the page-putter to flush its queue. Also drop this process from the
- throttle log. Called automatically at Python exit.
+ Wait for the page-putter to flush its queue. Also drop this process from
+ the throttle log. Called automatically at Python exit.
"""
_logger = "wiki"
diff --git a/pywikibot/config2.py b/pywikibot/config2.py
index 161ee00..a575afc 100644
--- a/pywikibot/config2.py
+++ b/pywikibot/config2.py
@@ -33,7 +33,7 @@
"""
#
# (C) Rob W.W. Hooft, 2003
-# (C) Pywikibot team, 2003-2017
+# (C) Pywikibot team, 2003-2018
#
# Distributed under the terms of the MIT license.
#
@@ -358,9 +358,11 @@
if __no_user_config != '2':
output(exc_text)
else:
- exc_text += " Please check that user-config.py is stored in the correct location.\n"
- exc_text += " Directory where user-config.py is searched is determined as follows:\n\n"
- exc_text += " " + get_base_dir.__doc__
+ exc_text += (
+ ' Please check that user-config.py is stored in the correct '
+ 'location.\n'
+ ' Directory where user-config.py is searched is determined '
+ 'as follows:\n\n ') + get_base_dir.__doc__
raise RuntimeError(exc_text)
return base_dir
@@ -391,7 +393,8 @@
for file_name in os.listdir(folder_path):
if file_name.endswith("_family.py"):
family_name = file_name[:-len("_family.py")]
- register_family_file(family_name, os.path.join(folder_path, file_name))
+ register_family_file(family_name, os.path.join(folder_path,
+ file_name))
# Get the names of all known families, and initialize with empty dictionaries.
@@ -928,7 +931,8 @@
def _win32_extension_command(extension):
"""Get the command from the Win32 registry for an extension."""
- fileexts_key = r'Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts'
+ fileexts_key = \
+ r'Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts'
key_name = fileexts_key + r'\.' + extension + r'\OpenWithProgids'
_winreg = winreg # exists for git blame only; do not use
try:
@@ -946,8 +950,8 @@
return cmd[:-1].strip()
except WindowsError as e:
# Catch any key lookup errors
- output('Unable to detect program for file extension "{0}": {1!r}'.format(
- extension, e))
+ output('Unable to detect program for file extension "{0}": {1!r}'
+ .format(extension, e))
def _detect_win32_editor():
diff --git a/pywikibot/cosmetic_changes.py b/pywikibot/cosmetic_changes.py
index a3dee44..94af12a 100755
--- a/pywikibot/cosmetic_changes.py
+++ b/pywikibot/cosmetic_changes.py
@@ -5,7 +5,9 @@
The changes are not supposed to change the look of the rendered wiki page.
-If you wish to run this as an stand-alone script, use scripts/cosmetic_changes.py
+If you wish to run this as an stand-alone script, use:
+
+ scripts/cosmetic_changes.py
For regular use, it is recommended to put this line into your user-config.py:
@@ -46,11 +48,12 @@
or by adding a list to the given one:
- cosmetic_changes_deny_script += ['your_script_name_1', 'your_script_name_2']
+ cosmetic_changes_deny_script += ['your_script_name_1',
+ 'your_script_name_2']
"""
#
-# (C) xqt, 2009-2016
-# (C) Pywikibot team, 2006-2017
+# (C) xqt, 2009-2018
+# (C) Pywikibot team, 2006-2018
#
# Distributed under the terms of the MIT license.
#
@@ -205,7 +208,8 @@
try:
self.namespace = self.site.namespaces.resolve(namespace).pop(0)
except (KeyError, TypeError, IndexError):
- raise ValueError('%s needs a valid namespace' % self.__class__.__name__)
+ raise ValueError('{0} needs a valid namespace'
+ .format(self.__class__.__name__))
self.template = (self.namespace == 10)
self.talkpage = self.namespace >= 0 and self.namespace % 2 == 1
self.title = pageTitle
@@ -269,7 +273,8 @@
new_text = self._change(text)
except Exception as e:
if self.ignore == CANCEL_PAGE:
- pywikibot.warning(u'Skipped "{0}", because an error occurred.'.format(self.title))
+ pywikibot.warning('Skipped "{0}", because an error occurred.'
+ .format(self.title))
pywikibot.exception(e)
return False
else:
@@ -317,7 +322,7 @@
self.site.code not in ('et', 'it', 'bg', 'ru'):
categories = textlib.getCategoryLinks(text, site=self.site)
- if not self.talkpage: # and pywikibot.calledModuleName() <> 'interwiki':
+ if not self.talkpage:
subpage = False
if self.template:
loc = None
@@ -340,11 +345,6 @@
# e.g. using categories.sort()
# TODO: Taking main cats to top
- # for name in categories:
- # if (re.search(u"(.+?)\|(.{,1}?)",name.title()) or
- # name.title() == name.title().split(":")[0] + title):
- # categories.remove(name)
- # categories.insert(0, name)
text = textlib.replaceCategoryLinks(text, categories,
site=self.site)
# Adding the interwiki
@@ -373,8 +373,8 @@
namespaces = list(namespace)
thisNs = namespaces.pop(0)
if namespace.id == 6 and family.name == 'wikipedia':
- if self.site.code in ('en', 'fr') and \
- MediaWikiVersion(self.site.version()) >= MediaWikiVersion('1.14'):
+ if self.site.code in ('en', 'fr') and MediaWikiVersion(
+ self.site.version()) >= MediaWikiVersion('1.14'):
# do not change "Image" on en-wiki and fr-wiki
assert u'Image' in namespaces
namespaces.remove(u'Image')
@@ -615,11 +615,12 @@
def removeUselessSpaces(self, text):
"""Cleanup multiple or trailing spaces."""
- exceptions = ['comment', 'math', 'nowiki', 'pre', 'startspace', 'table']
+ exceptions = ['comment', 'math', 'nowiki', 'pre', 'startspace',
+ 'table']
if self.site.sitename != 'wikipedia:cs':
exceptions.append('template')
- text = textlib.replaceExcept(text, r'(?m)[\t ]+( |$)', r'\1', exceptions,
- site=self.site)
+ text = textlib.replaceExcept(text, r'(?m)[\t ]+( |$)', r'\1',
+ exceptions, site=self.site)
return text
def removeNonBreakingSpaceBeforePercent(self, text):
@@ -658,15 +659,16 @@
Add a space between the * or # and the text.
NOTE: This space is recommended in the syntax help on the English,
- German, and French Wikipedia. It might be that it is not wanted on other
- wikis. If there are any complaints, please file a bug report.
+ German, and French Wikipedia. It might be that it is not wanted on
+ other wikis. If there are any complaints, please file a bug report.
"""
if not self.template:
- exceptions = ['comment', 'math', 'nowiki', 'pre', 'source', 'template',
- 'timeline', self.site.redirectRegex()]
+ exceptions = ['comment', 'math', 'nowiki', 'pre', 'source',
+ 'template', 'timeline', self.site.redirectRegex()]
text = textlib.replaceExcept(
text,
- r'(?m)^(?P<bullet>[:;]*(\*+|#+)[:;\*#]*)(?P<char>[^\s\*#:;].+?)',
+ r'(?m)'
+ r'^(?P<bullet>[:;]*(\*+|#+)[:;\*#]*)(?P<char>[^\s\*#:;].+?)',
r'\g<bullet> \g<char>',
exceptions)
return text
@@ -797,7 +799,8 @@
def fixReferences(self, text):
"""Fix references tags."""
- # See also https://en.wikipedia.org/wiki/User:AnomieBOT/source/tasks/OrphanReferenceFi…
+ # See also
+ # https://en.wikipedia.org/wiki/User:AnomieBOT/source/tasks/OrphanReferenceFi…
exceptions = ['nowiki', 'comment', 'math', 'pre', 'source',
'startspace']
@@ -825,7 +828,8 @@
def fixTypo(self, text):
"""Fix units."""
exceptions = ['nowiki', 'comment', 'math', 'pre', 'source',
- 'startspace', 'gallery', 'hyperlink', 'interwiki', 'link']
+ 'startspace', 'gallery', 'hyperlink', 'interwiki',
+ 'link']
# change <number> ccm -> <number> cm³
text = textlib.replaceExcept(text, r'(\d)\s*(?: )?ccm',
r'\1 cm³', exceptions,
@@ -835,7 +839,8 @@
pattern = re.compile(u'«.*?»', re.UNICODE)
exceptions.append(pattern)
text = textlib.replaceExcept(text, r'(\d)\s*(?: )?[º°]([CF])',
- r'\1 °\2', exceptions, site=self.site)
+ r'\1 °\2', exceptions,
+ site=self.site)
text = textlib.replaceExcept(text, u'º([CF])', u'°' + r'\1',
exceptions,
site=self.site)
@@ -874,7 +879,8 @@
# not to let bot edits in latin content
exceptions.append(re.compile(u"[^%(fa)s] *?\"*? *?, *?[^%(fa)s]"
% {'fa': faChrs}))
- text = textlib.replaceExcept(text, ',', '،', exceptions, site=self.site)
+ text = textlib.replaceExcept(text, ',', '،', exceptions,
+ site=self.site)
if self.site.code == 'ckb':
text = textlib.replaceExcept(text,
'\u0647([.\u060c_<\\]\\s])',
@@ -915,7 +921,8 @@
It is working according to [1] and works only on pages in the file
namespace on the Wikimedia Commons.
- [1]: https://commons.wikimedia.org/wiki/Commons:Tools/pywiki_file_description_cl…
+ [1]:
+ https://commons.wikimedia.org/wiki/Commons:Tools/pywiki_file_description_cl…
"""
if self.site.sitename != 'commons:commons' or self.namespace == 6:
return
@@ -932,14 +939,16 @@
r"\1== {{int:license-header}} ==", exceptions, True)
text = textlib.replaceExcept(
text,
- r"([\r\n])\=\= *(Licensing|License information|{{int:license}}) *\=\=",
+ r'([\r\n])'
+ r'\=\= *(Licensing|License information|{{int:license}}) *\=\=',
r"\1== {{int:license-header}} ==", exceptions, True)
# frequent field values to {{int:}} versions
text = textlib.replaceExcept(
text,
r'([\r\n]\|[Ss]ource *\= *)'
- r'(?:[Oo]wn work by uploader|[Oo]wn work|[Ee]igene [Aa]rbeit) *([\r\n])',
+ r'(?:[Oo]wn work by uploader|[Oo]wn work|[Ee]igene [Aa]rbeit) *'
+ r'([\r\n])',
r'\1{{own}}\2', exceptions, True)
text = textlib.replaceExcept(
text,
@@ -960,7 +969,8 @@
# duplicated section headers
text = textlib.replaceExcept(
text,
- r'([\r\n]|^)\=\= *{{int:filedesc}} *\=\=(?:[\r\n ]*)\=\= *{{int:filedesc}} *\=\=',
+ r'([\r\n]|^)\=\= *{{int:filedesc}} *\=\=(?:[\r\n ]*)\=\= *'
+ r'{{int:filedesc}} *\=\=',
r'\1== {{int:filedesc}} ==', exceptions, True)
text = textlib.replaceExcept(
text,
diff --git a/pywikibot/exceptions.py b/pywikibot/exceptions.py
index 2e456fb..bc1e8f3 100644
--- a/pywikibot/exceptions.py
+++ b/pywikibot/exceptions.py
@@ -80,7 +80,7 @@
- FamilyMaintenanceWarning: missing information in family definition
"""
#
-# (C) Pywikibot team, 2008-2017
+# (C) Pywikibot team, 2008-2018
#
# Distributed under the terms of the MIT license.
#
@@ -442,7 +442,8 @@
"""Page already exists."""
- message = u"Destination article %s already exists and is not a redirect to the source article"
+ message = ('Destination article %s already exists and is not a redirect '
+ 'to the source article')
pass
@@ -451,7 +452,8 @@
"""Page save failed because MediaWiki detected a blacklisted spam URL."""
- message = "Edit to page %(title)s rejected by spam filter due to content:\n%(url)s"
+ message = ('Edit to page %(title)s rejected by spam filter due to '
+ 'content:\n%(url)s')
def __init__(self, page, url):
"""Constructor."""
diff --git a/pywikibot/family.py b/pywikibot/family.py
index 0dae080..a4c49f0 100644
--- a/pywikibot/family.py
+++ b/pywikibot/family.py
@@ -170,14 +170,16 @@
'crh': u'[a-zâçğıñöşüа-яё“»]*',
'cs': u'[a-záčďéěíňóřšťúůýž]*',
'csb': u'[a-zęóąśłżźćńĘÓĄŚŁŻŹĆŃ]*',
- 'cu': u'[a-zабвгдеєжѕзїіıићклмнопсстѹфхѡѿцчшщъыьѣюѥѧѩѫѭѯѱѳѷѵґѓђёјйљњќуўџэ҄я“»]*',
+ 'cu': ('[a-zабвгдеєжѕзїіıићклмнопсстѹфхѡѿцчшщъыьѣюѥѧѩѫѭѯѱѳѷѵґѓђё'
+ 'јйљњќуўџэ҄я“»]*'),
'cv': u'[a-zа-яĕçăӳ"»]*',
'cy': u'[àáâèéêìíîïòóôûŵŷa-z]*',
'da': u'[a-zæøå]*',
'de': u'[a-zäöüß]*',
'din': '[äëɛɛ̈éɣïŋöɔɔ̈óa-z]*',
'dsb': u'[äöüßa-z]*',
- 'el': u'[a-zαβγδεζηθικλμνξοπρστυφχψωςΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩάέήίόύώϊϋΐΰΆΈΉΊΌΎΏΪΫ]*',
+ 'el': ('[a-zαβγδεζηθικλμνξοπρστυφχψωςΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩάέή'
+ 'ίόύώϊϋΐΰΆΈΉΊΌΎΏΪΫ]*'),
'eml': u'[a-zàéèíîìóòúù]*',
'es': u'[a-záéíóúñ]*',
'eu': u'[a-záéíóúñ]*',
@@ -209,7 +211,8 @@
'it': u'[a-zàéèíîìóòúù]*',
'ka': u'[a-zაბგდევზთიკლმნოპჟრსტუფქღყშჩცძწჭხჯჰ“»]*',
'kbp': '[a-zàâçéèêîôûäëïöüùÇÉÂÊÎÔÛÄËÏÖÜÀÈÙ]*',
- 'kk': u'[a-zäçéğıïñöşüýʺʹа-яёәғіқңөұүһٴابپتجحدرزسشعفقكلمنڭەوۇۋۆىيچھ“»]*',
+ 'kk': ('[a-zäçéğıïñöşüýʺʹа-яёәғіқңөұүһ'
+ 'ٴابپتجحدرزسشعفقكلمنڭەوۇۋۆىيچھ“»]*'),
'kl': u'[a-zæøå]*',
'koi': u'[a-zабвгдеёжзийклмнопрстуфхцчшщъыьэюя]*',
'krc': u'[a-zабвгдеёжзийклмнопрстуфхцчшщъыьэюя]*',
@@ -251,7 +254,8 @@
'oc': u'[a-zàâçéèêîôû]*',
'olo': '[a-zčČšŠžŽäÄöÖ]*',
'or': u'[a-z-]*',
- 'pa': u'[ਁਂਃਅਆਇਈਉਊਏਐਓਔਕਖਗਘਙਚਛਜਝਞਟਠਡਢਣਤਥਦਧਨਪਫਬਭਮਯਰਲਲ਼ਵਸ਼ਸਹ਼ਾਿੀੁੂੇੈੋੌ੍ਖ਼ਗ਼ਜ਼ੜਫ਼ੰੱੲੳa-z]*',
+ 'pa': ('[ਁਂਃਅਆਇਈਉਊਏਐਓਔਕਖਗਘਙਚਛਜਝਞਟਠਡਢਣਤਥਦਧਨਪਫਬਭਮਯਰਲਲ਼ਵਸ਼ਸਹ਼ਾ'
+ 'ਿੀੁੂੇੈੋੌ੍ਖ਼ਗ਼ਜ਼ੜਫ਼ੰੱੲੳa-z]*'),
'pcd': u'[a-zàâçéèêîôûäëïöüùÇÉÂÊÎÔÛÄËÏÖÜÀÈÙ]*',
'pdc': u'[äöüßa-z]*',
'pfl': u'[äöüßa-z]*',
@@ -274,7 +278,8 @@
'sh': u'[a-zčćđžš]*',
'sk': u'[a-záäčďéíľĺňóôŕšťúýž]*',
'sl': u'[a-zčćđžš]*',
- 'sr': u'[abvgdđežzijklljmnnjoprstćufhcčdžšабвгдђежзијклљмнњопрстћуфхцчџш]*',
+ 'sr': ('[abvgdđežzijklljmnnjoprstćufhcčdžšабвгдђежзијклљмнњопрстћу'
+ 'фхцчџш]*'),
'srn': u'[a-zäöüïëéèà]*',
'stq': u'[äöüßa-z]*',
'sv': u'[a-zåäöéÅÄÖÉ]*',
@@ -715,7 +720,7 @@
'_default': []
}
- # A list of languages that use hard (instead of soft) category redirects
+ # A list of languages that use hard (not soft) category redirects
self.use_hard_category_redirects = []
# A list of disambiguation template names in different languages
@@ -851,10 +856,11 @@
'nrm', 'nv', 'ny', 'oc', 'om', 'pag', 'pam', 'pap', 'pcd',
'pdc', 'pfl', 'pih', 'pl', 'pms', 'pt', 'qu', 'rm', 'rn', 'ro',
'roa-rup', 'roa-tara', 'rw', 'sc', 'scn', 'sco', 'se', 'sg',
- 'simple', 'sk', 'sl', 'sm', 'sn', 'so', 'sq', 'srn', 'ss', 'st',
- 'stq', 'su', 'sv', 'sw', 'szl', 'tet', 'tl', 'tn', 'to', 'tpi',
- 'tr', 'ts', 'tum', 'tw', 'ty', 'uz', 've', 'vec', 'vi', 'vls',
- 'vo', 'wa', 'war', 'wo', 'xh', 'yo', 'zea', 'zh-min-nan', 'zu',
+ 'simple', 'sk', 'sl', 'sm', 'sn', 'so', 'sq', 'srn', 'ss',
+ 'st', 'stq', 'su', 'sv', 'sw', 'szl', 'tet', 'tl', 'tn', 'to',
+ 'tpi', 'tr', 'ts', 'tum', 'tw', 'ty', 'uz', 've', 'vec', 'vi',
+ 'vls', 'vo', 'wa', 'war', 'wo', 'xh', 'yo', 'zea',
+ 'zh-min-nan', 'zu',
# languages using multiple scripts, including latin
'az', 'chr', 'ckb', 'ha', 'iu', 'kk', 'ku', 'rmy', 'sh', 'sr',
'tt', 'ug', 'za'
@@ -1123,8 +1129,8 @@
@param code: The site code
@param uri: The absolute path after the hostname
- @param protocol: The protocol which is used. If None it'll determine the
- protocol from the code.
+ @param protocol: The protocol which is used. If None it'll determine
+ the protocol from the code.
@return: The full URL
@rtype: str
"""
@@ -1633,7 +1639,8 @@
@property
def domain(self):
"""Domain property."""
- if self.name in self.multi_language_content_families + self.other_content_families:
+ if self.name in (self.multi_language_content_families
+ + self.other_content_families):
return self.name + '.org'
elif self.name in self.wikimedia_org_families:
return 'wikimedia.org'
diff --git a/pywikibot/fixes.py b/pywikibot/fixes.py
index e821969..3a68c67 100644
--- a/pywikibot/fixes.py
+++ b/pywikibot/fixes.py
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
"""File containing all standard fixes."""
#
-# (C) Pywikibot team, 2008-2017
+# (C) Pywikibot team, 2008-2018
#
# Distributed under the terms of the MIT license.
#
@@ -59,18 +59,23 @@
(r'(?i)<em>(.*?)</em>', r"''\1''"),
# horizontal line without attributes in a single line
(r'(?i)([\r\n])<hr[ /]*>([\r\n])', r'\1----\2'),
- # horizontal line without attributes with more text in the same line
+ # horizontal line without attributes with more text in same line
# (r'(?i) +<hr[ /]*> +', r'\r\n----\r\n'),
# horizontal line with attributes; can't be done with wiki syntax
# so we only make it XHTML compliant
(r'(?i)<hr ([^>/]+?)>', r'<hr \1 />'),
# a header where only spaces are in the same line
- (r'(?i)([\r\n]) *<h1> *([^<]+?) *</h1> *([\r\n])', r"\1= \2 =\3"),
- (r'(?i)([\r\n]) *<h2> *([^<]+?) *</h2> *([\r\n])', r"\1== \2 ==\3"),
- (r'(?i)([\r\n]) *<h3> *([^<]+?) *</h3> *([\r\n])', r"\1=== \2 ===\3"),
- (r'(?i)([\r\n]) *<h4> *([^<]+?) *</h4> *([\r\n])', r"\1==== \2 ====\3"),
- (r'(?i)([\r\n]) *<h5> *([^<]+?) *</h5> *([\r\n])', r"\1===== \2 =====\3"),
- (r'(?i)([\r\n]) *<h6> *([^<]+?) *</h6> *([\r\n])', r"\1====== \2 ======\3"),
+ (r'(?i)([\r\n]) *<h1> *([^<]+?) *</h1> *([\r\n])', r'\1= \2 =\3'),
+ (r'(?i)([\r\n]) *<h2> *([^<]+?) *</h2> *([\r\n])',
+ r'\1== \2 ==\3'),
+ (r'(?i)([\r\n]) *<h3> *([^<]+?) *</h3> *([\r\n])',
+ r'\1=== \2 ===\3'),
+ (r'(?i)([\r\n]) *<h4> *([^<]+?) *</h4> *([\r\n])',
+ r'\1==== \2 ====\3'),
+ (r'(?i)([\r\n]) *<h5> *([^<]+?) *</h5> *([\r\n])',
+ r'\1===== \2 =====\3'),
+ (r'(?i)([\r\n]) *<h6> *([^<]+?) *</h6> *([\r\n])',
+ r'\1====== \2 ======\3'),
# TODO: maybe we can make the bot replace <p> tags with \r\n's.
],
'exceptions': {
@@ -102,12 +107,14 @@
# zusammengesetztes Wort, Bindestrich wird durchgeschleift
(r'(?<!\w)(\d+|\d+[.,]\d+)(\$|€|DM|£|¥|mg|g|kg|ml|cl|l|t|ms|min'
r'|µm|mm|cm|dm|m|km|ha|°C|kB|MB|GB|TB|W|kW|MW|GW|PS|Nm|eV|kcal'
- r'|mA|mV|kV|Ω|Hz|kHz|MHz|GHz|mol|Pa|Bq|Sv|mSv)([²³]?-[\w\[])', r'\1-\2\3'),
+ r'|mA|mV|kV|Ω|Hz|kHz|MHz|GHz|mol|Pa|Bq|Sv|mSv)([²³]?-[\w\[])',
+ r'\1-\2\3'),
# Größenangabe ohne Leerzeichen vor Einheit
# weggelassen wegen vieler falsch Positiver: s, A, V, C, S, J, %
(r'(?<!\w)(\d+|\d+[.,]\d+)(\$|€|DM|£|¥|mg|g|kg|ml|cl|l|t|ms|min'
r'|µm|mm|cm|dm|m|km|ha|°C|kB|MB|GB|TB|W|kW|MW|GW|PS|Nm|eV|kcal'
- r'|mA|mV|kV|Ω|Hz|kHz|MHz|GHz|mol|Pa|Bq|Sv|mSv)(?=\W|²|³|$)', r'\1 \2'),
+ r'|mA|mV|kV|Ω|Hz|kHz|MHz|GHz|mol|Pa|Bq|Sv|mSv)(?=\W|²|³|$)',
+ r'\1 \2'),
# Temperaturangabe mit falsch gesetztem Leerzeichen
(r'(?<!\w)(\d+|\d+[.,]\d+)° C(?=\W|²|³|$)', r'\1 °C'),
# Kein Leerzeichen nach Komma
@@ -119,7 +126,8 @@
# https://de.wikipedia.org/wiki/Plenk#Franz.C3.B6sische_Sprache
# Leerzeichen vor Doppelpunkt/Semikolon kann korrekt sein,
# z.B. nach Quellenangaben
- (r'([a-zäöüß](\]\])?) ([,.!?]) ((\[\[)?[a-zäöüA-ZÄÖÜ])', r'\1\3 \4'),
+ (r'([a-zäöüß](\]\])?) ([,.!?]) ((\[\[)?[a-zäöüA-ZÄÖÜ])',
+ r'\1\3 \4'),
# (u'([a-z]\.)([A-Z])', r'\1 \2'),
],
'exceptions': {
@@ -152,9 +160,11 @@
r'DOS/4GW', # Software
r'ntfs-3g', # Dateisystem-Treiber
r'/\w(,\w)*/', # Laut-Aufzählung in der Linguistik
- # Variablen in der Mathematik (unklar, ob Leerzeichen hier Pflicht sind)
+ # Variablen in der Mathematik
+ # (unklar, ob Leerzeichen hier Pflicht sind)
r'[xyz](,[xyz])+',
- # Definitionslisten, dort gibt es oft absichtlich Leerzeichen vor Doppelpunkten
+ # Definitionslisten, dort gibt es oft absichtlich Leerzeichen
+ # vor Doppelpunkten
r'(?m)^;(.*?)$',
r'\d+h( | )\d+m',
# Schreibweise für Zeiten, vor allem in Film-Infoboxen.
@@ -162,7 +172,8 @@
r'(?i)\[\[(Bild|Image|Media):.+?\|', # Dateinamen auslassen
r'{{bgc\|.*?}}', # Hintergrundfarbe
r'<sup>\d+m</sup>', # bei chemischen Formeln
- r'\([A-Z][A-Za-z]*(,[A-Z][A-Za-z]*(<sup>.*?</sup>|<sub>.*?</sub>|))+\)'
+ r'\([A-Z][A-Za-z]*(,[A-Z][A-Za-z]*'
+ r'(<sup>.*?</sup>|<sub>.*?</sub>|))+\)'
# chemische Formel, z. B. AuPb(Pb,Sb,Bi)Te.
# Hier sollen keine Leerzeichen hinter die Kommata.
],
@@ -251,7 +262,8 @@
# dash in external link, where the correct end of the URL can
# be detected from the file extension. It is very unlikely that
# this will cause mistakes.
- (r'\[(?P<url>https?://[^\|\] ]+?(\.pdf|\.html|\.htm|\.php|\.asp|\.aspx|\.jsp)) *\|'
+ (r'\[(?P<url>https?://[^\|\] ]+?'
+ r'(\.pdf|\.html|\.htm|\.php|\.asp|\.aspx|\.jsp)) *\|'
r' *(?P<label>[^\|\]]+?)\]', r'[\g<url> \g<label>]'),
],
'exceptions': {
@@ -311,7 +323,8 @@
},
'replacements': [
# Bindestrich, Gedankenstrich, Geviertstrich
- (r'(von \d{3,4}) *(-|–|&ndash;|—|&mdash;) *(\d{3,4})', r'\1 bis \3'),
+ (r'(von \d{3,4}) *(-|–|&ndash;|—|&mdash;) *(\d{3,4})',
+ r'\1 bis \3'),
],
},
@@ -355,8 +368,10 @@
# (u'†\[\[(\d)', u'† [[\\1'),
# (u'&dagger;\[\[(\d)', u'&dagger; [[\\1'),
(r'\[\[(\d+\. (?:Januar|Februar|März|April|Mai|Juni|Juli|August|'
- r'September|Oktober|November|Dezember)) (\d{1,4})\]\]', r'[[\1]] [[\2]]'),
- # Keine führende Null beim Datum (erst einmal nur bei fehlenden Leerzeichen)
+ r'September|Oktober|November|Dezember)) (\d{1,4})\]\]',
+ r'[[\1]] [[\2]]'),
+ # Keine führende Null beim Datum
+ # (erst einmal nur bei fehlenden Leerzeichen)
(r'0(\d+)\.(Januar|Februar|März|April|Mai|Juni|Juli|August|'
r'September|Oktober|November|Dezember)', r'\1. \2'),
# Kein Leerzeichen zwischen Tag und Monat
@@ -386,9 +401,11 @@
# Remove colon between the word ISBN and the number
(r'ISBN: (\d+)', r'ISBN \1'),
# superfluous word "number"
- (r'ISBN(?: [Nn]umber| [Nn]o\.?|-Nummer|-Nr\.):? (\d+)', r'ISBN \1'),
+ (r'ISBN(?: [Nn]umber| [Nn]o\.?|-Nummer|-Nr\.):? (\d+)',
+ r'ISBN \1'),
# Space, minus, dot, hypen, en dash, em dash, etc. instead of
- # hyphen-minus as separator, or spaces between digits and separators.
+ # hyphen-minus as separator,
+ # or spaces between digits and separators.
# Note that these regular expressions also match valid ISBNs, but
# these won't be changed.
# These two regexes don't verify that the ISBN is of a valid format
@@ -399,7 +416,8 @@
r'*[\- −.‐-―] *(\d+) *[\- −.‐-―] *(\d)(?!\d)',
r'ISBN \1-\2-\3-\4-\5'), # ISBN-13
- (r'ISBN (\d+) *[\- −.‐-―] *(\d+) *[\- −.‐-―] *(\d+) *[\- −.‐-―] *(\d|X|x)(?!\d)',
+ (r'ISBN (\d+) *[\- −.‐-―] *(\d+) *[\- −.‐-―] *(\d+) *'
+ r'[\- −.‐-―] *(\d|X|x)(?!\d)',
r'ISBN \1-\2-\3-\4'), # ISBN-10
# missing space before ISBN-10 or before ISBN-13,
# or multiple spaces or non-breaking space.
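The ISBN-10 rule rewrapped above can be exercised in isolation. Below is a minimal sketch; the `SEP` constant and the `normalize_isbn10` helper are illustrative names, not part of Pywikibot:

```python
import re

# Simplified stand-alone version of the ISBN-10 separator fix above:
# spaces and dash-like characters (hyphen, minus sign U+2212, dot,
# and the U+2010..U+2015 dash range) between the four groups are
# collapsed to plain hyphen-minus.
SEP = r' *[\- −.‐-―] *'
isbn10 = re.compile(
    r'ISBN (\d+){0}(\d+){0}(\d+){0}(\d|X|x)(?!\d)'.format(SEP))

def normalize_isbn10(text):
    return isbn10.sub(r'ISBN \1-\2-\3-\4', text)

print(normalize_isbn10('ISBN 0 19 852663 6'))
# -> 'ISBN 0-19-852663-6'
```

As the comment in the diff notes, a valid ISBN also matches but is rewritten to the identical string, so correct input passes through unchanged.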
@@ -568,8 +586,8 @@
(u'Special:Whatlinkshere', u'Special:WhatLinksHere'),
],
},
- # yu top-level domain will soon be disabled,
- # see http://lists.wikimedia.org/pipermail/wikibots-l/2009-February/000290.html
+ # yu top-level domain will soon be disabled, see
+ # http://lists.wikimedia.org/pipermail/wikibots-l/2009-February/000290.html
# The following are domains that are often-used.
'yu-tld': {
'regex': False,
@@ -578,7 +596,8 @@
'de': u'Bot: Ersetze Links auf .yu-Domains',
'en': u'Robot: Replacing links to .yu domains',
'fa': u'ربات: جایگزینی پیوندها به دامنهها با پسوند yu',
- 'fr': u'Robot: Correction des liens pointant vers le domaine .yu, qui expire en 2009',
+ 'fr': ('Robot: Correction des liens pointant vers le domaine '
+ '.yu, qui expire en 2009'),
'ksh': u'Bot: de ahle .yu-Domains loufe us, dröm ußjetuusch',
},
'replacements': [
@@ -592,7 +611,8 @@
(u'eunet.yu', u'eunet.rs'),
(u'www.zastava-arms.co.yu', u'www.zastava-arms.co.rs'),
(u'www.airportnis.co.yu', u'www.airportnis.rs'),
- # (u'www.danas.co.yu', u'www.danas.rs'), # Archive links don't seem to work
+ # Archive links don't seem to work
+ # (u'www.danas.co.yu', u'www.danas.rs'),
(u'www.belex.co.yu', u'www.belex.rs'),
(u'beograd.org.yu', u'beograd.rs'),
(u'www.vlada.cg.yu', u'www.vlada.me'),
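All of the fix entries rewrapped in this file are (pattern, replacement) pairs. A hedged, shortened sketch of how one such pair behaves, using the German "missing space before unit" rule; `apply_fix` and the trimmed unit list are illustrative, not Pywikibot API:

```python
import re

# Hypothetical, shortened version of the rule above: insert a space
# between a number and a unit symbol glued directly to it.
unit_fix = (
    r'(?<!\w)(\d+|\d+[.,]\d+)'
    r'(kg|km|cm|mm|GHz|MHz|kHz|Hz|mol|Pa)(?=\W|$)',
    r'\1 \2',
)

def apply_fix(text, fix=unit_fix):
    """Apply a single (pattern, replacement) fix tuple to text."""
    pattern, repl = fix
    return re.sub(pattern, repl, text)

print(apply_fix('Die Frequenz betrug 433MHz.'))
# -> 'Die Frequenz betrug 433 MHz.'
```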
diff --git a/pywikibot/textlib.py b/pywikibot/textlib.py
index 4ced2bc..a946186 100644
--- a/pywikibot/textlib.py
+++ b/pywikibot/textlib.py
@@ -134,8 +134,8 @@
"""
Change Latin digits based on language to localized version.
- Be aware that this function only works for several languages,
- and that it returns an unchanged string if an unsupported language is given.
+ Be aware that this function only works for several languages, and that it
+ returns an unchanged string if an unsupported language is given.
@param phrase: The phrase to convert to localized numerical
@param lang: language code
@@ -245,7 +245,8 @@
list(site.family.obsolete.keys()))),
# Wikibase property inclusions
'property': (r'(?i)\{\{\s*\#(?:%s):\s*p\d+.*?\}\}',
- lambda site: '|'.join(site.getmagicwords('property'))),
+ lambda site: '|'.join(
+ site.getmagicwords('property'))),
# Module invocations (currently only Lua)
'invoke': (r'(?is)\{\{\s*\#(?:%s):.*?\}\}',
lambda site: '|'.join(site.getmagicwords('invoke'))),
@@ -384,11 +385,7 @@
# We cannot just insert the new string, as it may contain regex
# group references such as \2 or \g<name>.
# On the other hand, this approach does not work because it
- # can't handle lookahead or lookbehind (see bug T123185):
- #
- # replacement = old.sub(new, text[match.start():match.end()])
- # text = text[:match.start()] + replacement + text[match.end():]
-
+ # can't handle lookahead or lookbehind (see bug T123185).
# So we have to process the group references manually.
replacement = ''
@@ -418,7 +415,7 @@
else:
index = match.start() + len(replacement)
if not match.group():
- # When the regex allows to match nothing, shift by one character
+ # When the regex allows to match nothing, shift by one char
index += 1
markerpos = match.start() + len(replacement)
replaced += 1
@@ -473,7 +470,8 @@
"""
# try to merge with 'removeDisabledParts()' above into one generic function
- # thanks to https://www.hellboundhackers.org/articles/read-article.php?article_id=841
+ # thanks to:
+ # https://www.hellboundhackers.org/articles/read-article.php?article_id=841
parser = _GetDataHTML()
parser.keeptags = keeptags
parser.feed(text)
@@ -500,7 +498,7 @@
def isDisabled(text, index, tags=['*']):
"""
- Return True if text[index] is disabled, e.g. by a comment or by nowiki tags.
+ Return True if text[index] is disabled, e.g. by a comment or nowiki tags.
For the tags parameter, see L{removeDisabledParts}.
"""
@@ -535,8 +533,8 @@
@param separator: the separator string allowed before the marker. If empty
it won't include whitespace too.
@type separator: str
- @return: the marker with the separator and whitespace from the text in front
- of it. It'll be just the marker if the separator is empty.
+ @return: the marker with the separator and whitespace from the text in
+ front of it. It'll be just the marker if the separator is empty.
@rtype: str
"""
# set to remove any number of separator occurrences plus arbitrary
@@ -567,9 +565,9 @@
The text is searched for a link and on each link it replaces the text
depending on the result for that link. If the result is just None it skips
- that link. When it's False it unlinks it and just inserts the label. When it
- is a Link instance it'll use the target, section and label from that Link
- instance. If it's a Page instance it'll use just the target from the
+ that link. When it's False it unlinks it and just inserts the label. When
+ it is a Link instance it'll use the target, section and label from that
+ Link instance. If it's a Page instance it'll use just the target from the
replacement and the section and label from the original link.
If it's a string and the replacement was a sequence it converts it into a
@@ -589,13 +587,13 @@
allows for user interaction. The groups are a dict containing 'title',
'section', 'label' and 'linktrail' and the rng are the start and end
position of the link. The 'label' in groups contains everything after
- the first pipe which might contain additional data which is used in File
- namespace for example.
+ the first pipe which might contain additional data which is used in
+ File namespace for example.
Alternatively it can be a sequence containing two items where the first
- must be a Link or Page and the second has almost the same meaning as the
- result by the callable. It'll convert that into a callable where the
- first item (the Link or Page) has to be equal to the found link and in
- that case it will apply the second value from the sequence.
+ must be a Link or Page and the second has almost the same meaning as
+ the result by the callable. It'll convert that into a callable where
+ the first item (the Link or Page) has to be equal to the found link and
+ in that case it will apply the second value from the sequence.
@type replace: sequence of pywikibot.Page/pywikibot.Link/str or
callable
@param site: a Site object to use if replace is not a sequence or the link
@@ -733,8 +731,8 @@
curpos = rng[0] + len(replacement)
continue
elif isinstance(replacement, bytes):
- raise ValueError('The result must be unicode (str in Python 3) and '
- 'not bytes (str in Python 2).')
+ raise ValueError('The result must be unicode (str in Python 3) '
+ 'and not bytes (str in Python 2).')
# Verify that it's either Link, Page or basestring
check_replacement_class(replacement)
@@ -771,11 +769,13 @@
parsed_link_title = title_section(parsed_link_text)
replacement_title = title_section(replacement)
# compare title, but only with parts if linktrail works
- if not linktrail.sub('', parsed_link_title[len(replacement_title):]):
+ if not linktrail.sub('',
+ parsed_link_title[len(replacement_title):]):
# TODO: This must also compare everything that was used as a
# prefix (in case insensitive)
- must_piped = (not parsed_link_title.startswith(replacement_title) or
- parsed_link_text.namespace != replacement.namespace)
+ must_piped = (
+ not parsed_link_title.startswith(replacement_title)
+ or parsed_link_text.namespace != replacement.namespace)
if must_piped:
newlink = '[[{0}|{1}]]'.format(new_page_title, link_text)
@@ -1159,7 +1159,8 @@
catNamespace = '|'.join(site.namespaces.CATEGORY)
categoryR = re.compile(r'\[\[\s*(%s)\s*:.*?\]\]\s*' % catNamespace, re.I)
text = replaceExcept(text, categoryR, '',
- ['nowiki', 'comment', 'math', 'pre', 'source', 'includeonly'],
+ ['nowiki', 'comment', 'math', 'pre', 'source',
+ 'includeonly'],
marker=marker,
site=site)
if marker:
@@ -1224,7 +1225,7 @@
if newcat is None:
# First go through and try the more restrictive regex that removes
# an entire line, if the category is the only thing on that line (this
- # prevents blank lines left over in category lists following a removal.)
+ # prevents blank lines left over in category lists following a removal)
text = replaceExcept(oldtext, categoryRN, '',
exceptions, site=site)
text = replaceExcept(text, categoryR, '',
@@ -1237,8 +1238,9 @@
exceptions, site=site)
else:
text = replaceExcept(oldtext, categoryR,
- '[[%s:%s\\2' % (site.namespace(14),
- newcat.title(withNamespace=False)),
+ '[[{0}:{1}\\2'
+ .format(site.namespace(14),
+ newcat.title(withNamespace=False)),
exceptions, site=site)
return text
@@ -1254,8 +1256,8 @@
@type new: iterable
@param site: The site that the text is from.
@type site: pywikibot.Site
- @param addOnly: If addOnly is True, the old category won't be deleted and the
- category(s) given will be added (and so they won't replace anything).
+ @param addOnly: If addOnly is True, the old category won't be deleted and
+ the category(s) given will be added (and they won't replace anything).
@type addOnly: bool
@return: The modified text.
@rtype: str
@@ -1268,8 +1270,9 @@
pywikibot.error(
'The Pywikibot is no longer allowed to touch categories on the '
'German\nWikipedia on pages that contain the Personendaten '
- 'template because of the\nnon-standard placement of that template.\n'
- 'See https://de.wikipedia.org/wiki/Hilfe:Personendaten#Kopiervorlage')
+ 'template because of the\nnon-standard placement of that template.'
+ '\nSee https://de.wikipedia.org/wiki/Hilfe:Personendaten'
+ '#Kopiervorlage')
return oldtext
separator = site.family.category_text_separator
iseparator = site.family.interwiki_text_separator
@@ -1334,7 +1337,8 @@
if isinstance(category, basestring):
category, separator, sortKey = category.strip('[]').partition('|')
sortKey = sortKey if separator else None
- prefix = category.split(":", 1)[0] # whole word if no ":" is present
+ # whole word if no ":" is present
+ prefix = category.split(':', 1)[0]
if prefix not in insite.namespaces[14]:
category = u'{0}:{1}'.format(insite.namespace(14), category)
category = pywikibot.Category(pywikibot.Link(category,
@@ -1380,7 +1384,8 @@
# .'' shouldn't be considered as part of the link.
regex = r'(?P<url>http[s]?://[^%(notInside)s]*?[^%(notAtEnd)s]' \
r'(?=[%(notAtEnd)s]*\'\')|http[s]?://[^%(notInside)s]*' \
- r'[^%(notAtEnd)s])' % {'notInside': notInside, 'notAtEnd': notAtEnd}
+ r'[^%(notAtEnd)s])' % {'notInside': notInside,
+ 'notAtEnd': notAtEnd}
if withoutBracketed:
regex = r'(?<!\[)' + regex
@@ -1593,31 +1598,6 @@
# {{#if: }}
if not name or name.startswith('#'):
continue
-
-# TODO: implement the following; 'self' and site dont exist in this function
-# if self.site().isInterwikiLink(name):
-# continue
-# # {{DEFAULTSORT:...}}
-# from pywikibot.tools import MediaWikiVersion
-# defaultKeys = MediaWikiVersion(self.site.version()) > MediaWikiVersion("1.13") and \
-# self.site().getmagicwords('defaultsort')
-# # It seems some wikis does not have this magic key
-# if defaultKeys:
-# found = False
-# for key in defaultKeys:
-# if name.startswith(key):
-# found = True
-# break
-# if found: continue
-#
-# try:
-# name = Page(self.site(), name).title()
-# except InvalidTitle:
-# if name:
-# output(
-# u"Page %s contains invalid template name {{%s}}."
-# % (self.title(), name.strip()))
-# continue
# Parameters
paramString = m.group('params')
@@ -1938,7 +1918,8 @@
self.groups = ['year', 'month', 'hour', 'time', 'day', 'minute',
'tzinfo']
- timeR = r'(?P<time>(?P<hour>([0-1]\d|2[0-3]))[:\.h](?P<minute>[0-5]\d))'
+ timeR = (r'(?P<time>(?P<hour>([0-1]\d|2[0-3]))[:\.h]'
+ r'(?P<minute>[0-5]\d))')
timeznR = r'\((?P<tzinfo>[A-Z]+)\)'
yearR = r'(?P<year>(19|20)\d\d)(?:%s)?' % u'\ub144'
# if months have 'digits' as names, they need to be
@@ -1955,7 +1936,9 @@
self.is_digit_month = True
monthR = r'(?P<month>(%s)|(?:1[012]|0?[1-9])\.)' \
% u'|'.join(escaped_months)
- dayR = r'(?P<day>(3[01]|[12]\d|0?[1-9]))(?:%s)?\.?\s*(?:[01]?\d\.)?' % u'\uc77c'
+ dayR = (
+ r'(?P<day>(3[01]|[12]\d|0?[1-9]))(?:{0})?\.?\s*(?:[01]?\d\.)?'
+ .format('\uc77c'))
else:
self.is_digit_month = False
monthR = r'(?P<month>(%s))' % u'|'.join(escaped_months)
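The rewrapped timeR pattern can be sanity-checked on its own: hours 00-23, a ':', '.' or 'h' separator, minutes 00-59.

```python
import re

# The time pattern as assembled in the diff above, checked in
# isolation against a plausible signature fragment.
timeR = (r'(?P<time>(?P<hour>([0-1]\d|2[0-3]))[:\.h]'
         r'(?P<minute>[0-5]\d))')

m = re.search(timeR, 'signed at 23h59 (UTC)')
print(m.group('hour'), m.group('minute'))
# -> 23 59
```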
@@ -2089,8 +2072,8 @@
# rightmost one.
most_recent = []
for comment in self._comment_pat.finditer(line):
- # Recursion levels can be maximum two. If a comment is found, it will
- # not for sure be found in the next level.
+ # Recursion levels can be maximum two. If a comment is found, it
+ # will not for sure be found in the next level.
# Nested comments are excluded by design.
timestamp = self.timestripper(comment.group(1))
most_recent.append(timestamp)
@@ -2142,7 +2125,7 @@
try:
value = self.origNames2monthNum[dateDict['month']['value']]
except KeyError:
- pywikibot.output(u'incorrect month name "%s" in page in site %s'
+ pywikibot.output('incorrect month name "%s" in page in site %s'
% (dateDict['month']['value'], self.site))
raise KeyError
else:
@@ -2155,8 +2138,9 @@
try:
dateDict[k] = int(v['value'])
except ValueError:
- raise ValueError('Value: %s could not be converted for key: %s.'
- % (v['value'], k))
+ raise ValueError(
+ 'Value: {0} could not be converted for key: {1}.'
+ .format(v['value'], k))
# find timezone
dateDict['tzinfo'] = self.tzinfo
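The "shift by one char" comment shortened in this diff guards replaceExcept against zero-width matches: when a pattern can match the empty string, the scan index must still advance or the loop never terminates. A minimal sketch of that invariant; `replace_all` is an illustrative reduction, not the real function:

```python
import re

def replace_all(pattern, repl, text):
    """Repeatedly replace matches, advancing past empty matches."""
    regex = re.compile(pattern)
    index = 0
    out = text
    while True:
        match = regex.search(out, index)
        if match is None:
            break
        out = out[:match.start()] + repl + out[match.end():]
        index = match.start() + len(repl)
        if not match.group():
            # zero-width match: step past it to guarantee progress
            index += 1
    return out

print(replace_all(r'a*', '-', 'bab'))
# -> '-b--b-'
```

Without the `index += 1`, the pattern `a*` would match the empty string at the same position forever.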
diff --git a/scripts/archivebot.py b/scripts/archivebot.py
index 930a8d9..56ff18b 100755
--- a/scripts/archivebot.py
+++ b/scripts/archivebot.py
@@ -88,8 +88,8 @@
"""
#
# (C) Misza13, 2006-2010
-# (C) xqt, 2009-2016
-# (C) Pywikibot team, 2007-2017
+# (C) xqt, 2009-2018
+# (C) Pywikibot team, 2007-2018
#
# Distributed under the terms of the MIT license.
#
@@ -164,7 +164,7 @@
Localise a shorthand duration.
Translates a duration written in the shorthand notation (ex. "24h", "7d")
- into an expression in the local language of the wiki ("24 hours", "7 days").
+ into an expression in the local wiki language ("24 hours", "7 days").
"""
key, duration = checkstr(string)
template = site.mediawiki_message(MW_KEYS[key])
@@ -594,7 +594,8 @@
def load_config(self):
"""Load and validate archiver template."""
- pywikibot.output(u'Looking for: {{%s}} in %s' % (self.tpl.title(), self.page))
+ pywikibot.output('Looking for: {{%s}} in %s' % (self.tpl.title(),
+ self.page))
for tpl in self.page.templatesWithParams():
if tpl[0] == pywikibot.Page(self.site, self.tpl.title(), ns=10):
for param in tpl[1]:
@@ -616,14 +617,15 @@
Also checks for security violations.
"""
title = archive.title()
+ page_title = self.page.title()
if not title:
return
- if not self.force \
- and not self.page.title() + '/' == title[:len(self.page.title()) + 1] \
- and not self.key_ok():
+ if not (self.force
+ or page_title + '/' == title[:len(page_title) + 1]
+ or self.key_ok()):
raise ArchiveSecurityError(
u"Archive page %s does not start with page title (%s)!"
- % (archive, self.page.title()))
+ % (archive, page_title))
if title not in self.archives:
self.archives[title] = DiscussionPage(archive, self, params)
return self.archives[title].feed_thread(thread, max_archive_size)
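The refactored security check applies De Morgan's law: `not a and not b and not c` is rewritten as `not (a or b or c)`. A quick exhaustive check of the equivalence:

```python
from itertools import product

# Verify the rewrite in feed_archive above is behaviour-preserving
# for every combination of the three boolean conditions.
for force, prefix_ok, key_ok in product([False, True], repeat=3):
    old = not force and not prefix_ok and not key_ok
    new = not (force or prefix_ok or key_ok)
    assert old == new
print('equivalent for all 8 combinations')
```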
@@ -650,15 +652,20 @@
params = {
'counter': to_local_digits(arch_counter, lang),
'year': to_local_digits(t.timestamp.year, lang),
- 'isoyear': to_local_digits(t.timestamp.isocalendar()[0], lang),
- 'isoweek': to_local_digits(t.timestamp.isocalendar()[1], lang),
+ 'isoyear': to_local_digits(t.timestamp.isocalendar()[0],
+ lang),
+ 'isoweek': to_local_digits(t.timestamp.isocalendar()[1],
+ lang),
'quarter': to_local_digits(
int(ceil(float(t.timestamp.month) / 3)), lang),
'month': to_local_digits(t.timestamp.month, lang),
- 'monthname': self.month_num2orig_names[t.timestamp.month]['long'],
- 'monthnameshort': self.month_num2orig_names[t.timestamp.month]['short'],
+ 'monthname': self.month_num2orig_names[
+ t.timestamp.month]['long'],
+ 'monthnameshort': self.month_num2orig_names[
+ t.timestamp.month]['short'],
'week': to_local_digits(
- int(time.strftime('%W', t.timestamp.timetuple())), lang),
+ int(time.strftime('%W',
+ t.timestamp.timetuple())), lang),
}
archive = pywikibot.Page(self.site, archive % params)
if self.feed_archive(archive, t, max_arch_size, params):
@@ -685,16 +692,19 @@
if whys:
# Search for the marker template
rx = re.compile(r'\{\{%s\s*?\n.*?\n\}\}'
- % (template_title_regex(self.tpl).pattern), re.DOTALL)
+ % (template_title_regex(self.tpl).pattern),
+ re.DOTALL)
if not rx.search(self.page.header):
raise MalformedConfigError(
"Couldn't find the template in the header"
)
- pywikibot.output(u'Archiving %d thread(s).' % self.archived_threads)
+ pywikibot.output('Archiving {0} thread(s).'
+ .format(self.archived_threads))
# Save the archives first (so that bugs don't cause a loss of data)
for a in sorted(self.archives.keys()):
- self.comment_params['count'] = self.archives[a].archived_threads
+ self.comment_params['count'] = self.archives[
+ a].archived_threads
comment = i18n.twtranslate(self.site.code,
'archivebot-archive-summary',
self.comment_params)
@@ -771,12 +781,15 @@
if page.exists():
calc = page.title()
else:
- pywikibot.output(u'NOTE: the specified page "%s" does not (yet) exist.' % calc)
+ pywikibot.output(
+ 'NOTE: the specified page "{0}" does not (yet) exist.'
+ .format(calc))
pywikibot.output('key = %s' % calc_md5_hexdigest(calc, salt))
return
if not args:
- pywikibot.bot.suggest_help(additional_text='No template was specified.')
+ pywikibot.bot.suggest_help(
+ additional_text='No template was specified.')
return False
for a in args:
@@ -807,7 +820,7 @@
pywikibot.error('Missing or malformed template in page %s: %s'
% (pg, e))
except Exception:
- pywikibot.error(u'Error occurred while processing page %s' % pg)
+ pywikibot.error('Error occurred while processing page %s' % pg)
pywikibot.exception(tb=True)
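The params dict rewrapped above maps archive-name placeholders to values derived from a thread timestamp. A simplified sketch with Python 3 semantics; the key names mirror the diff, but `archive_params` is an illustration, not the bot's code (localisation via to_local_digits is omitted):

```python
from datetime import datetime
from math import ceil

def archive_params(ts):
    """Derive archive-name placeholder values from a timestamp."""
    isoyear, isoweek, _ = ts.isocalendar()
    return {
        'year': ts.year,
        'isoyear': isoyear,           # ISO calendar year
        'isoweek': isoweek,           # ISO week number
        'quarter': int(ceil(ts.month / 3)),
        'month': ts.month,
        'week': int(ts.strftime('%W')),  # week, Monday as first day
    }

print(archive_params(datetime(2018, 2, 1)))
```

Note that `isoyear` can differ from `year` around New Year, which is why both are offered as placeholders.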
diff --git a/scripts/basic.py b/scripts/basic.py
index 2112953..b532e2a 100755
--- a/scripts/basic.py
+++ b/scripts/basic.py
@@ -25,7 +25,7 @@
-summary: Set the action summary message for the edit.
"""
#
-# (C) Pywikibot team, 2006-2017
+# (C) Pywikibot team, 2006-2018
#
# Distributed under the terms of the MIT license.
#
@@ -58,10 +58,10 @@
"""
An incomplete sample bot.
- @ivar summary_key: Edit summary message key. The message that should be used
- is placed on /i18n subdirectory. The file containing these messages
- should have the same name as the caller script (i.e. basic.py in this
- case). Use summary_key to set a default edit summary message.
+ @ivar summary_key: Edit summary message key. The message that should be
+ used is placed on /i18n subdirectory. The file containing these
+ messages should have the same name as the caller script (i.e. basic.py
+ in this case). Use summary_key to set a default edit summary message.
@type summary_key: str
"""
diff --git a/scripts/category.py b/scripts/category.py
index ea53852..bedd4c9 100755
--- a/scripts/category.py
+++ b/scripts/category.py
@@ -109,7 +109,7 @@
# (C) leogregianin, 2004-2008
# (C) Ben McIlwain (CydeWeys), 2006-2015
# (C) Anreas J Schwab, 2007
-# (C) xqt, 2009-2016
+# (C) xqt, 2009-2018
# (C) Pywikibot team, 2008-2018
#
# Distributed under the terms of the MIT license.
@@ -415,9 +415,11 @@
if not comment:
comment = i18n.twtranslate(self.current_page.site,
'category-adding',
- {'newcat': catpl.title(withNamespace=False)})
+ {'newcat': catpl.title(
+ withNamespace=False)})
try:
- self.userPut(self.current_page, old_text, text, summary=comment)
+ self.userPut(self.current_page, old_text, text,
+ summary=comment)
except pywikibot.PageSaveRelatedError as error:
pywikibot.output(u'Page %s not saved: %s'
% (self.current_page.title(asLink=True),
@@ -484,7 +486,8 @@
page don't exist.
"""
self.site = pywikibot.Site()
- self.can_move_cats = ('move-categorypages' in self.site.userinfo['rights'])
+ self.can_move_cats = (
+ 'move-categorypages' in self.site.userinfo['rights'])
# Create attributes for the categories and their talk pages.
self.oldcat = self._makecat(oldcat)
self.oldtalk = self.oldcat.toggleTalkPage()
@@ -512,9 +515,9 @@
repo = self.site.data_repository()
if self.wikibase and repo.username() is None:
# The bot can't move categories nor update the Wikibase repo
- raise pywikibot.NoUsername(u"The 'wikibase' option is turned on"
- u" and %s has no registered username."
- % repo)
+ raise pywikibot.NoUsername(
+ "The 'wikibase' option is turned on and {0} has no "
+ 'registered username.'.format(repo))
template_vars = {'oldcat': self.oldcat.title(withNamespace=False)}
if self.newcat:
@@ -552,8 +555,8 @@
template_vars)
else:
# Category is deleted.
- self.deletion_comment = i18n.twtranslate(self.site,
- 'category-was-disbanded')
+ self.deletion_comment = i18n.twtranslate(
+ self.site, 'category-was-disbanded')
self.move_comment = move_comment if move_comment else self.comment
def run(self):
@@ -650,15 +653,17 @@
inPlace=self.inplace,
sortKey=self.keep_sortkey)
- # Categories for templates can be included in <includeonly> section
- # of Template:Page/doc subpage.
- # TODO: doc page for a template can be Anypage/doc, as specified in
+ # Categories for templates can be included in <includeonly>
+ # section of Template:Page/doc subpage.
+ # TODO: doc page for a template can be Anypage/doc, as
+ # specified in
# {{Template:Documentation}} -> not managed here
# TODO: decide if/how to enable/disable this feature
if page.namespace() == 10:
docs = page.site.doc_subpage # return tuple
for doc in docs:
- doc_page = pywikibot.Page(page.site, page.title() + doc)
+ doc_page = pywikibot.Page(page.site,
+ page.title() + doc)
template_docs.add(doc_page)
for doc_page in pagegenerators.PreloadingGenerator(template_docs):
@@ -704,7 +709,8 @@
comma = self.site.mediawiki_message('comma-separator')
authors = comma.join(self.oldcat.contributingUsers())
template_vars = {'oldcat': self.oldcat.title(), 'authors': authors}
- summary = i18n.twtranslate(self.site, 'category-renamed', template_vars)
+ summary = i18n.twtranslate(self.site, 'category-renamed',
+ template_vars)
self.newcat.text = self.oldcat.text
self._strip_cfd_templates(summary)
@@ -804,9 +810,10 @@
"""
@deprecated('CategoryMoveRobot.__init__()')
- def __init__(self, catTitle, batchMode=False, editSummary='',
- useSummaryForDeletion=CategoryMoveRobot.DELETION_COMMENT_AUTOMATIC,
- titleRegex=None, inPlace=False, pagesonly=False):
+ def __init__(
+ self, catTitle, batchMode=False, editSummary='',
+ useSummaryForDeletion=CategoryMoveRobot.DELETION_COMMENT_AUTOMATIC,
+ titleRegex=None, inPlace=False, pagesonly=False):
"""Constructor."""
super(CategoryRemoveRobot, self).__init__(
oldcat=catTitle,
@@ -876,13 +883,13 @@
"""Script to help by moving articles of the category into subcategories.
Specify the category name on the command line. The program will pick up the
- page, and look for all subcategories and supercategories, and show them with
- a number adjacent to them. It will then automatically loop over all pages
- in the category. It will ask you to type the number of the appropriate
- replacement, and perform the change robotically.
+ page, and look for all subcategories and supercategories, and show them
+ with a number adjacent to them. It will then automatically loop over all
+ pages in the category. It will ask you to type the number of the
+ appropriate replacement, and perform the change robotically.
- If you don't want to move the article to a subcategory or supercategory, but
- to another category, you can use the 'j' (jump) command.
+ If you don't want to move the article to a subcategory or supercategory,
+ but to another category, you can use the 'j' (jump) command.
Typing 's' will leave the complete page unchanged.
@@ -928,7 +935,7 @@
def output_range(self, start, end):
pywikibot.output('\n' + full_text[:end] + '\n')
- # if categories possibly weren't visible, show them additionally
+ # if categories weren't visible, show them additionally
# (maybe this should always be shown?)
if len(self.text) > end:
pywikibot.output('')
@@ -984,7 +991,8 @@
StandardOption('skip this article', 's'),
StandardOption('remove this category tag', 'r'),
context_option,
- StandardOption('save category as "{0}"'.format(current_cat.title()), 'c'))
+ StandardOption('save category as "{0}"'
+ .format(current_cat.title()), 'c'))
choice = pywikibot.input_choice(color_format(
'Choice for page {lightpurple}{0}{default}:\n',
article.title()), options, default='c')
@@ -1106,11 +1114,6 @@
After string was generated by treeview it is either printed to the
console or saved it to a file.
-
- Parameters:
- * catTitle - the title of the category which will be the tree's root
- * maxDepth - the limit beyond which no subcategories will be listed
-
"""
cat = pywikibot.Category(self.site, self.catTitle)
pywikibot.output('Generating tree...', newline=False)
@@ -1295,8 +1298,8 @@
catTitle = pywikibot.input(
u'For which category do you want to create a tree view?')
filename = pywikibot.input(
- u'Please enter the name of the file where the tree should be saved,'
- u'\nor press enter to simply show the tree:')
+ 'Please enter the name of the file where the tree should be saved,'
+ '\nor press enter to simply show the tree:')
bot = CategoryTreeRobot(catTitle, catDB, filename, depth)
elif action == 'listify':
if not fromGiven:
--
To view, visit https://gerrit.wikimedia.org/r/406405
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings
Gerrit-MessageType: merged
Gerrit-Change-Id: I1c2755c7c41ddfb9300cfcebf9c8ad58133647af
Gerrit-PatchSet: 5
Gerrit-Project: pywikibot/core
Gerrit-Branch: master
Gerrit-Owner: Xqt <info(a)gno.de>
Gerrit-Reviewer: Dalba <dalba.wiki(a)gmail.com>
Gerrit-Reviewer: Framawiki <framawiki(a)tools.wmflabs.org>
Gerrit-Reviewer: John Vandenberg <jayvdb(a)gmail.com>
Gerrit-Reviewer: Xqt <info(a)gno.de>
Gerrit-Reviewer: Zoranzoki21 <zorandori4444(a)gmail.com>
Gerrit-Reviewer: jenkins-bot <>
jenkins-bot has submitted this change and it was merged. ( https://gerrit.wikimedia.org/r/407440 )
Change subject: [bugfix] fix super class call
......................................................................
[bugfix] fix super class call
Bug: T186220
Change-Id: I62f683b37e1731a2930f68c1d96bbf519305433f
---
M tests/site_tests.py
1 file changed, 1 insertion(+), 1 deletion(-)
Approvals:
Dalba: Looks good to me, approved
jenkins-bot: Verified
Zoranzoki21: Looks good to me, but someone else must approve
diff --git a/tests/site_tests.py b/tests/site_tests.py
index e6d95f2..f2dd8ef 100644
--- a/tests/site_tests.py
+++ b/tests/site_tests.py
@@ -1071,7 +1071,7 @@
def setUp(self):
"""Skip tests if Linter extension is missing."""
- super(TestLinterPages, self).setUpClass()
+ super(TestLinterPages, self).setUp()
if not self.site.has_extension('Linter'):
raise unittest.SkipTest(
'The site {0} does not use Linter extension'.format(self.site))
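The one-line fix matters because setUpClass is a class-level hook: calling it from setUp skips the base class's per-test initialisation (which is what creates self.site here). A minimal reproduction of the corrected pattern; class and attribute names are simplified stand-ins, not the real test framework:

```python
import unittest

class BaseSiteCase(unittest.TestCase):
    def setUp(self):
        super(BaseSiteCase, self).setUp()
        self.site = 'wikipedia:test'  # per-test fixture

class LinterCase(BaseSiteCase):
    def setUp(self):
        # must chain to setUp(), not setUpClass(), or the fixture
        # created by BaseSiteCase.setUp() never exists
        super(LinterCase, self).setUp()
        if not hasattr(self, 'site'):
            raise unittest.SkipTest('no site fixture')

    def test_fixture(self):
        self.assertEqual(self.site, 'wikipedia:test')

result = unittest.TestResult()
LinterCase('test_fixture').run(result)
print(result.wasSuccessful())
# -> True
```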
--
To view, visit https://gerrit.wikimedia.org/r/407440
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings
Gerrit-MessageType: merged
Gerrit-Change-Id: I62f683b37e1731a2930f68c1d96bbf519305433f
Gerrit-PatchSet: 1
Gerrit-Project: pywikibot/core
Gerrit-Branch: master
Gerrit-Owner: Xqt <info(a)gno.de>
Gerrit-Reviewer: Dalba <dalba.wiki(a)gmail.com>
Gerrit-Reviewer: Zoranzoki21 <zorandori4444(a)gmail.com>
Gerrit-Reviewer: jenkins-bot <>