jenkins-bot has submitted this change and it was merged.
Change subject: Pagegenerators: ns handling for titleregex option
......................................................................
Pagegenerators: ns handling for titleregex option
Workaround for bug T85389, as previously done for -newpages.
If -namespace is given before -titleregex, it can now be handled.
With argparse support, it will become possible to solve the bug completely.
Bug: T57226
Change-Id: I69bf8b9782b97426c0b67ef97b3892f152fe0fb1
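The ordering constraint can be sketched without a live site. Below is a minimal, self-contained sketch (all names are hypothetical, not pywikibot's API) of an option handler that, like GeneratorFactory, can only honour -ns if it was consumed before -titleregex:

```python
class FactorySketch:
    """Order-sensitive option handling, mimicking -ns before -titleregex."""

    def __init__(self):
        self.namespaces = []
        self.gen_namespace = None

    def handle_arg(self, arg):
        if arg.startswith('-ns:'):
            self.namespaces = [int(n) for n in arg[len('-ns:'):].split(',')]
        elif arg.startswith('-titleregex:'):
            # allpages accepts a single namespace only.
            if len(self.namespaces) > 1:
                raise TypeError(
                    'allpages module does not support multiple namespaces')
            # A namespace takes effect only if it was already given;
            # otherwise the default namespace 0 is used (bug T85389).
            self.gen_namespace = self.namespaces[0] if self.namespaces else 0


f = FactorySketch()
f.handle_arg('-ns:1')           # given first, so it is honoured
f.handle_arg('-titleregex:.*')
print(f.gen_namespace)          # 1
```

With the arguments reversed, the generator is built before the namespace is known and silently falls back to 0, which is what test_regexfilter_ns_after asserts.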
---
M pywikibot/pagegenerators.py
M tests/pagegenerators_tests.py
2 files changed, 37 insertions(+), 9 deletions(-)
Approvals:
John Vandenberg: Looks good to me, approved
jenkins-bot: Verified
diff --git a/pywikibot/pagegenerators.py b/pywikibot/pagegenerators.py
index 601f809..0053ce6 100644
--- a/pywikibot/pagegenerators.py
+++ b/pywikibot/pagegenerators.py
@@ -116,6 +116,8 @@
before -newpages.
If used with -recentchanges, efficiency is improved if
-namepace/ns is provided before -recentchanges.
+ If used with -titleregex, -namespace/ns must be provided
+ before -titleregex and shall contain only one value.
-interwiki Work on the given page and all equivalent pages in other
languages. This can, for example, be used to fight
@@ -698,7 +700,15 @@
regex = pywikibot.input(u'What page names are you looking for?')
else:
regex = arg[12:]
- gen = RegexFilterPageGenerator(self.site.allpages(), regex)
+ # partial workaround for bug T85389
+ # to use -namespace/ns with -titleregex, -ns must be given
+ # before -titleregex, otherwise the default namespace is 0.
+ # allpages only accepts a single namespace, and will raise a
+ # TypeError if self.namespaces contains more than one namespace.
+ namespaces = self.namespaces or 0
+ gen = RegexFilterPageGenerator(
+ self.site.allpages(namespace=namespaces),
+ regex)
elif arg.startswith('-grep'):
if len(arg) == 5:
self.articlefilter_list.append(pywikibot.input(
diff --git a/tests/pagegenerators_tests.py b/tests/pagegenerators_tests.py
index b6fc34d..25e0875 100755
--- a/tests/pagegenerators_tests.py
+++ b/tests/pagegenerators_tests.py
@@ -548,21 +548,39 @@
self.assertIsInstance(page, pywikibot.Page)
self.assertRegex(page.title().lower(), '(.)\\1+')
- def test_regexfilter_ns(self):
- raise unittest.SkipTest('This test takes over 10 minutes due to T85389')
+ def test_regexfilter_ns_after(self):
+ """Bug: T85389: -ns after -titleregex is ignored with a warning."""
gf = pagegenerators.GeneratorFactory()
self.assertTrue(gf.handleArg('-titleregex:.*'))
- gf.handleArg('-limit:10')
gf.handleArg('-ns:1')
+ gf.handleArg('-limit:10')
gen = gf.getCombinedGenerator()
- # The code below takes due to bug T85389
pages = list(gen)
- # TODO: Fix RegexFilterPageGenerator to handle namespaces other than 0
- # Bug: T85389
- # Below should fail
self.assertGreater(len(pages), 0)
self.assertLessEqual(len(pages), 10)
- self.assertPagesInNamespaces(gen, 1)
+ self.assertPagesInNamespaces(pages, 0)
+
+ def test_regexfilter_ns_first(self):
+ gf = pagegenerators.GeneratorFactory()
+ # Workaround for Bug: T85389
+ # Give -ns before -titleregex (as for -newpages)
+ gf.handleArg('-ns:1')
+ self.assertTrue(gf.handleArg('-titleregex:.*'))
+ gf.handleArg('-limit:10')
+ gen = gf.getCombinedGenerator()
+ pages = list(gen)
+ self.assertGreater(len(pages), 0)
+ self.assertLessEqual(len(pages), 10)
+ self.assertPagesInNamespaces(pages, 1)
+
+ def test_regexfilter_two_ns_first(self):
+ gf = pagegenerators.GeneratorFactory()
+ gf.handleArg('-ns:3,1')
+ self.assertRaisesRegex(
+ TypeError,
+ 'allpages module does not support multiple namespaces',
+ gf.handleArg,
+ '-titleregex:.*')
def test_prefixing_default(self):
gf = pagegenerators.GeneratorFactory()
--
To view, visit https://gerrit.wikimedia.org/r/181993
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings
Gerrit-MessageType: merged
Gerrit-Change-Id: I69bf8b9782b97426c0b67ef97b3892f152fe0fb1
Gerrit-PatchSet: 8
Gerrit-Project: pywikibot/core
Gerrit-Branch: master
Gerrit-Owner: Mpaa <mpaa.wiki(a)gmail.com>
Gerrit-Reviewer: John Vandenberg <jayvdb(a)gmail.com>
Gerrit-Reviewer: Ladsgroup <ladsgroup(a)gmail.com>
Gerrit-Reviewer: Mpaa <mpaa.wiki(a)gmail.com>
Gerrit-Reviewer: jenkins-bot <>
jenkins-bot has submitted this change and it was merged.
Change subject: [FIX] reflinks: Use BytesIO for binary data
......................................................................
[FIX] reflinks: Use BytesIO for binary data
As compressed data is binary, a BytesIO object needs to be used to
handle it. In I79703aa4d3a3d1df5cea546cb18152305c3b0cd4 the buffer was
replaced with io.StringIO, which only works on strings (or unicode in
Python 2). Before dd558f287f6e786a41a881b0956e43130260a385 the code used
StringIO.StringIO, which works on both bytes and strings, but with that
change io.StringIO was used on Python 3.
Bug: T86462
Change-Id: Ib3ff54242b25578a7387b97e4c6a10b6d21268ea
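The failure mode is reproducible with the standard library alone. This sketch mirrors the fixed code path in reflinks.py, with gzip.compress standing in for the downloaded response:

```python
import gzip
import io

payload = gzip.compress(b'<html>page body</html>')  # stands in for f.read()

# io.StringIO only accepts text, so the pre-fix code fails on Python 3.
try:
    io.StringIO(payload)
except TypeError as exc:
    print('io.StringIO rejected bytes:', exc)

# io.BytesIO holds the binary data, and GzipFile decompresses from it.
compressed = io.BytesIO(payload)
f = gzip.GzipFile(fileobj=compressed)
print(f.read(1000000))  # read at most the first 1,000,000 bytes
```

On Python 2, StringIO.StringIO masked the problem because it accepted both bytes and text.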
---
M scripts/reflinks.py
1 file changed, 2 insertions(+), 2 deletions(-)
Approvals:
John Vandenberg: Looks good to me, approved
jenkins-bot: Verified
diff --git a/scripts/reflinks.py b/scripts/reflinks.py
index d241476..415d94d 100644
--- a/scripts/reflinks.py
+++ b/scripts/reflinks.py
@@ -601,8 +601,8 @@
# XXX: small issue here: the whole page is downloaded
# through f.read(). It might fetch big files/pages.
# However, truncating an encoded gzipped stream is not
- # an option, for unzipping will fail.
- compressed = io.StringIO(f.read())
+ # an option, or unzipping will fail.
+ compressed = io.BytesIO(f.read())
f = gzip.GzipFile(fileobj=compressed)
# Read the first 1,000,000 bytes (0.95 MB)
--
To view, visit https://gerrit.wikimedia.org/r/186950
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings
Gerrit-MessageType: merged
Gerrit-Change-Id: Ib3ff54242b25578a7387b97e4c6a10b6d21268ea
Gerrit-PatchSet: 1
Gerrit-Project: pywikibot/core
Gerrit-Branch: master
Gerrit-Owner: XZise <CommodoreFabianus(a)gmx.de>
Gerrit-Reviewer: John Vandenberg <jayvdb(a)gmail.com>
Gerrit-Reviewer: jenkins-bot <>
jenkins-bot has submitted this change and it was merged.
Change subject: [IMPROV] Reduce maximum line length to 130
......................................................................
[IMPROV] Reduce maximum line length to 130
Change-Id: Iaf41e959fd03040b6428f061f2ea4a1586cf3c32
---
M pywikibot/families/wikibooks_family.py
M pywikibot/families/wikiquote_family.py
M pywikibot/fixes.py
M pywikibot/site.py
M scripts/commonscat.py
M scripts/coordinate_import.py
M scripts/fixing_redirects.py
M scripts/harvest_template.py
M scripts/imagerecat.py
M scripts/login.py
M scripts/solve_disambiguation.py
M tests/deprecation_tests.py
M tests/ipregex_tests.py
M tests/textlib_tests.py
M tox.ini
15 files changed, 95 insertions(+), 50 deletions(-)
Approvals:
John Vandenberg: Looks good to me, approved
jenkins-bot: Verified
diff --git a/pywikibot/families/wikibooks_family.py b/pywikibot/families/wikibooks_family.py
index eccb20f..baf844c 100644
--- a/pywikibot/families/wikibooks_family.py
+++ b/pywikibot/families/wikibooks_family.py
@@ -72,7 +72,7 @@
'jp': 'ja',
'kn': None, # https://bugzilla.wikimedia.org/show_bug.cgi?id=20325
'ks': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_K…
- 'lb': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_L…
+ 'lb': None, # noqa https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_L…
'ln': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_L…
'lv': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_L…
'mi': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_M…
@@ -82,12 +82,12 @@
'na': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_N…
'nah': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_N…
'nb': 'no',
- 'nds': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_P…
+ 'nds': None, # noqa https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_P…
'ps': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_P…
'qu': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_Q…
'rm': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_R…
'se': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_S…
- 'simple': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_S…
+ 'simple': None, # noqa https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_S…
'su': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_B…
'sw': None, # https://bugzilla.wikimedia.org/show_bug.cgi?id=25170
'tk': None,
diff --git a/pywikibot/families/wikiquote_family.py b/pywikibot/families/wikiquote_family.py
index 3c91152..52dfffb 100644
--- a/pywikibot/families/wikiquote_family.py
+++ b/pywikibot/families/wikiquote_family.py
@@ -76,7 +76,7 @@
'nb': 'no',
'nds': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_L…
'qu': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_Q…
- 'simple': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_S…
+ 'simple': None, # noqa https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_S…
'tk': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_T…
'tokipona': None,
'tt': None, # https://meta.wikimedia.org/wiki/Proposals_for_closing_projects/Closure_of_T…
diff --git a/pywikibot/fixes.py b/pywikibot/fixes.py
index a045db3..3261004 100644
--- a/pywikibot/fixes.py
+++ b/pywikibot/fixes.py
@@ -109,7 +109,7 @@
'de': u'Bot: korrigiere Grammatik',
},
'replacements': [
- # (u'([Ss]owohl) ([^,\.]+?), als auch', r'\1 \2 als auch'),
+ # (u'([Ss]owohl) ([^,\.]+?), als auch', r'\1 \2 als auch'),
# (u'([Ww]eder) ([^,\.]+?), noch', r'\1 \2 noch'),
#
# Vorsicht bei Substantiven, z. B. 3-Jähriger!
@@ -170,7 +170,9 @@
r'/\w(,\w)*/', # Laut-Aufzählung in der Linguistik
r'[xyz](,[xyz])+', # Variablen in der Mathematik (unklar, ob Leerzeichen hier Pflicht sind)
r'(?m)^;(.*?)$', # Definitionslisten, dort gibt es oft absichtlich Leerzeichen vor Doppelpunkten
- r'\d+h( | )\d+m', # Schreibweise für Zeiten, vor allem in Film-Infoboxen. Nicht korrekt, aber dafür schön kurz.
+ r'\d+h( | )\d+m',
+ # Schreibweise für Zeiten, vor allem in Film-Infoboxen.
+ # Nicht korrekt, aber dafür schön kurz.
r'(?i)\[\[(Bild|Image|Media):.+?\|', # Dateinamen auslassen
r'{{bgc\|.*?}}', # Hintergrundfarbe
r'<sup>\d+m</sup>', # bei chemischen Formeln
@@ -301,7 +303,8 @@
# dash in external link, where the correct end of the URL can
# be detected from the file extension. It is very unlikely that
# this will cause mistakes.
- (r'\[(?P<url>https?://[^\|\] ]+?(\.pdf|\.html|\.htm|\.php|\.asp|\.aspx|\.jsp)) *\| *(?P<label>[^\|\]]+?)\]', r'[\g<url> \g<label>]'),
+ (r'\[(?P<url>https?://[^\|\] ]+?(\.pdf|\.html|\.htm|\.php|\.asp|\.aspx|\.jsp)) *\|'
+ r' *(?P<label>[^\|\]]+?)\]', r'[\g<url> \g<label>]'),
],
'exceptions': {
'inside-tags': [
@@ -401,13 +404,17 @@
# space after death sign w/ linked date
# (u'†\[\[(\d)', u'† [[\\1'),
# (u'†\[\[(\d)', u'† [[\\1'),
- (u'\[\[(\d+\. (?:Januar|Februar|März|April|Mai|Juni|Juli|August|September|Oktober|November|Dezember)) (\d{1,4})\]\]', u'[[\\1]] [[\\2]]'),
+ (u'\[\[(\d+\. (?:Januar|Februar|März|April|Mai|Juni|Juli|August|'
+ u'September|Oktober|November|Dezember)) (\d{1,4})\]\]', u'[[\\1]] [[\\2]]'),
# Keine führende Null beim Datum (ersteinmal nur bei denen, bei denen auch ein Leerzeichen fehlt)
- (u'0(\d+)\.(Januar|Februar|März|April|Mai|Juni|Juli|August|September|Oktober|November|Dezember)', r'\1. \2'),
+ (u'0(\d+)\.(Januar|Februar|März|April|Mai|Juni|Juli|August|'
+ u'September|Oktober|November|Dezember)', r'\1. \2'),
# Kein Leerzeichen zwischen Tag und Monat
- (u'(\d+)\.(Januar|Februar|März|April|Mai|Juni|Juli|August|September|Oktober|November|Dezember)', r'\1. \2'),
+ (u'(\d+)\.(Januar|Februar|März|April|Mai|Juni|Juli|August|'
+ u'September|Oktober|November|Dezember)', r'\1. \2'),
# Kein Punkt vorm Jahr
- (u'(\d+)\. (Januar|Februar|März|April|Mai|Juni|Juli|August|September|Oktober|November|Dezember)\.(\d{1,4})', r'\1. \2 \3'),
+ (u'(\d+)\. (Januar|Februar|März|April|Mai|Juni|Juli|August|'
+ u'September|Oktober|November|Dezember)\.(\d{1,4})', r'\1. \2 \3'),
],
'exceptions': {
'inside': [
diff --git a/pywikibot/site.py b/pywikibot/site.py
index 873cc6b..913bfc2 100644
--- a/pywikibot/site.py
+++ b/pywikibot/site.py
@@ -3956,12 +3956,16 @@
"noapiwrite": "API editing not enabled on %(site)s wiki",
"writeapidenied": "User %(user)s is not authorized to edit on %(site)s wiki",
"cantcreate": "User %(user)s not authorized to create new pages on %(site)s wiki",
- "cantcreate-anon": """Bot is not logged in, and anon users are not authorized to create new pages on %(site)s wiki""",
- "noimageredirect-anon": """Bot is not logged in, and anon users are not authorized to create image redirects on %(site)s wiki""",
+ "cantcreate-anon":
+ "Bot is not logged in, and anon users are not authorized to create "
+ "new pages on %(site)s wiki",
+ "noimageredirect-anon":
+ "Bot is not logged in, and anon users are not authorized to create "
+ "image redirects on %(site)s wiki",
"noimageredirect": "User %(user)s not authorized to create image redirects on %(site)s wiki",
"filtered": "%(info)s",
"contenttoobig": "%(info)s",
- "noedit-anon": """Bot is not logged in, and anon users are not authorized to edit on %(site)s wiki""",
+ "noedit-anon": "Bot is not logged in, and anon users are not authorized to edit on %(site)s wiki",
"noedit": "User %(user)s not authorized to edit pages on %(site)s wiki",
"missingtitle": NoCreateError,
diff --git a/scripts/commonscat.py b/scripts/commonscat.py
index cc9fb19..adabbe4 100755
--- a/scripts/commonscat.py
+++ b/scripts/commonscat.py
@@ -456,7 +456,9 @@
loguser = logitem.user()
logcomment = logitem.comment()
# Some logic to extract the target page.
- regex = u'moved to \[\[\:?Category:(?P<newcat1>[^\|\}]+)(\|[^\}]+)?\]\]|Robot: Changing Category:(.+) to Category:(?P<newcat2>.+)'
+ regex = (
+ r'moved to \[\[\:?Category:(?P<newcat1>[^\|\}]+)(\|[^\}]+)?\]\]|'
+ r'Robot: Changing Category:(.+) to Category:(?P<newcat2>.+)')
m = re.search(regex, logcomment, flags=re.I)
if m:
if m.group('newcat1'):
diff --git a/scripts/coordinate_import.py b/scripts/coordinate_import.py
index f4f680e..6074ba3 100644
--- a/scripts/coordinate_import.py
+++ b/scripts/coordinate_import.py
@@ -7,10 +7,12 @@
python coordinate_import.py -lang:en -family:wikipedia -cat:Category:Coordinates_not_on_Wikidata
-This will work on all pages in the category "coordinates not on Wikidata" and will import the coordinates on these pages to Wikidata.
+This will work on all pages in the category "coordinates not on Wikidata" and
+will import the coordinates on these pages to Wikidata.
-The data from the "GeoData" extension (https://www.mediawiki.org/wiki/Extension:GeoData) is used so that extension has to be setup properly.
-You can look at the [[Special:Nearby]] page on your local Wiki to see if it's populated.
+The data from the "GeoData" extension (https://www.mediawiki.org/wiki/Extension:GeoData)
+is used so that extension has to be setup properly. You can look at the
+[[Special:Nearby]] page on your local Wiki to see if it's populated.
You can use any typical pagegenerator to provide with a list of pages:
diff --git a/scripts/fixing_redirects.py b/scripts/fixing_redirects.py
index 10de73d..c8a8a51 100644
--- a/scripts/fixing_redirects.py
+++ b/scripts/fixing_redirects.py
@@ -66,7 +66,8 @@
linktrail = mysite.linktrail()
# make a backup of the original text so we can show the changes later
- linkR = re.compile(r'\[\[(?P<title>[^\]\|#]*)(?P<section>#[^\]\|]*)?(\|(?P<label>[^\]]*))?\]\](?P<linktrail>' + linktrail + ')')
+ linkR = re.compile(r'\[\[(?P<title>[^\]\|#]*)(?P<section>#[^\]\|]*)?'
+ r'(\|(?P<label>[^\]]*))?\]\](?P<linktrail>' + linktrail + ')')
curpos = 0
# This loop will run until we have finished the current page
while True:
diff --git a/scripts/harvest_template.py b/scripts/harvest_template.py
index 102931d..d5e0d64 100755
--- a/scripts/harvest_template.py
+++ b/scripts/harvest_template.py
@@ -5,11 +5,8 @@
Usage:
-python harvest_template.py -transcludes:"..." template_parameter PID [template_parameter PID]
-
- or
-
-python harvest_template.py [generators] -template:"..." template_parameter PID [template_parameter PID]
+* harvest_template.py -transcludes:"..." template_parameter PID [template_parameter PID]
+* harvest_template.py [generators] -template:"..." template_parameter PID [template_parameter PID]
This will work on all pages that transclude the template in the article
namespace
@@ -20,7 +17,7 @@
Examples:
-python harvest_template.py -lang:nl -cat:Sisoridae -template:"Taxobox straalvinnige" -namespace:0 orde P70 familie P71 geslacht P74
+* harvest_template.py -lang:nl -cat:Sisoridae -template:"Taxobox straalvinnige" -namespace:0 orde P70 familie P71 geslacht P74
"""
#
@@ -147,7 +144,9 @@
# Try to extract a valid page
match = re.search(pywikibot.link_regex, value)
if not match:
- pywikibot.output(u'%s field %s value %s isnt a wikilink. Skipping' % (claim.getID(), field, value))
+ pywikibot.output(
+ u'%s field %s value %s isnt a wikilink. Skipping'
+ % (claim.getID(), field, value))
continue
link_text = match.group(1)
diff --git a/scripts/imagerecat.py b/scripts/imagerecat.py
index 70e7ccd..7bf8f1d 100644
--- a/scripts/imagerecat.py
+++ b/scripts/imagerecat.py
@@ -146,7 +146,12 @@
# Cant handle other sites atm
return [], [], []
- commonsenseRe = re.compile('^#COMMONSENSE(.*)#USAGE(\s)+\((?P<usagenum>(\d)+)\)\s(?P<usage>(.*))\s#KEYWORDS(\s)+\((?P<keywords>(\d)+)\)(.*)#CATEGORIES(\s)+\((?P<catnum>(\d)+)\)\s(?P<cats>(.*))\s#GALLERIES(\s)+\((?P<galnum>(\d)+)\)\s(?P<gals>(.*))\s(.*)#EOF$', re.MULTILINE + re.DOTALL) # noqa
+ commonsenseRe = re.compile(
+ '^#COMMONSENSE(.*)#USAGE(\s)+\((?P<usagenum>(\d)+)\)\s(?P<usage>(.*))\s'
+ '#KEYWORDS(\s)+\((?P<keywords>(\d)+)\)(.*)'
+ '#CATEGORIES(\s)+\((?P<catnum>(\d)+)\)\s(?P<cats>(.*))\s'
+ '#GALLERIES(\s)+\((?P<galnum>(\d)+)\)\s(?P<gals>(.*))\s(.*)#EOF$',
+ re.MULTILINE + re.DOTALL)
gotInfo = False
matches = None
diff --git a/scripts/login.py b/scripts/login.py
index 71cdbb2..8d52556 100755
--- a/scripts/login.py
+++ b/scripts/login.py
@@ -86,7 +86,8 @@
elif arg == "-all":
logall = True
elif arg == "-force":
- pywikibot.output(u"To force a re-login, please delete the revelant lines from '%s' (or the entire file) and try again." %
+ pywikibot.output(u"To force a re-login, please delete the relevant "
+ u"lines from '%s' (or the entire file) and try again." %
join(config.base_dir, 'pywikibot.lwp'))
elif arg == "-logout":
logout = True
diff --git a/scripts/solve_disambiguation.py b/scripts/solve_disambiguation.py
index 83d1050..660a7ba 100644
--- a/scripts/solve_disambiguation.py
+++ b/scripts/solve_disambiguation.py
@@ -972,14 +972,22 @@
{'from': disambPage.title()}
)
else:
- self.comment = i18n.twtranslate(self.mysite, 'solve_disambiguation-redirect-resolved', {'from': disambPage.title(), 'to': targets})
+ self.comment = i18n.twtranslate(
+ self.mysite, 'solve_disambiguation-redirect-resolved',
+ {'from': disambPage.title(), 'to': targets})
else:
if unlink and not new_targets:
- self.comment = i18n.twtranslate(self.mysite, 'solve_disambiguation-links-removed', {'from': disambPage.title()})
+ self.comment = i18n.twtranslate(
+ self.mysite, 'solve_disambiguation-links-removed',
+ {'from': disambPage.title()})
elif dn and not new_targets:
- self.comment = i18n.twtranslate(self.mysite, 'solve_disambiguation-adding-dn-template', {'from': disambPage.title()})
+ self.comment = i18n.twtranslate(
+ self.mysite, 'solve_disambiguation-adding-dn-template',
+ {'from': disambPage.title()})
else:
- self.comment = i18n.twtranslate(self.mysite, 'solve_disambiguation-links-resolved', {'from': disambPage.title(), 'to': targets})
+ self.comment = i18n.twtranslate(
+ self.mysite, 'solve_disambiguation-links-resolved',
+ {'from': disambPage.title(), 'to': targets})
def run(self):
if self.main_only:
diff --git a/tests/deprecation_tests.py b/tests/deprecation_tests.py
index 2b13e38..b9a662e 100644
--- a/tests/deprecation_tests.py
+++ b/tests/deprecation_tests.py
@@ -628,27 +628,33 @@
rv = f.deprecated_instance_method_args(bah='b', bah2='c')
self.assertEqual(rv, ('b', 'c'))
self.assertDeprecation(
- 'bah argument of ' + __name__ + '.DeprecatedMethodClass.deprecated_instance_method_args is deprecated; use foo instead.')
+ 'bah argument of ' + __name__ + '.DeprecatedMethodClass.'
+ 'deprecated_instance_method_args is deprecated; use foo instead.')
self.assertDeprecation(
- 'bah2 argument of ' + __name__ + '.DeprecatedMethodClass.deprecated_instance_method_args is deprecated; use foo2 instead.')
+ 'bah2 argument of ' + __name__ + '.DeprecatedMethodClass.'
+ 'deprecated_instance_method_args is deprecated; use foo2 instead.')
DeprecatorTestCase._reset_messages()
rv = f.deprecated_instance_method_args(foo='b', bah2='c')
self.assertEqual(rv, ('b', 'c'))
self.assertNoDeprecation(
- 'bah argument of ' + __name__ + '.DeprecatedMethodClass.deprecated_instance_method_args is deprecated; use foo instead.')
+ 'bah argument of ' + __name__ + '.DeprecatedMethodClass.'
+ 'deprecated_instance_method_args is deprecated; use foo instead.')
self.assertDeprecation(
- 'bah2 argument of ' + __name__ + '.DeprecatedMethodClass.deprecated_instance_method_args is deprecated; use foo2 instead.')
+ 'bah2 argument of ' + __name__ + '.DeprecatedMethodClass.'
+ 'deprecated_instance_method_args is deprecated; use foo2 instead.')
DeprecatorTestCase._reset_messages()
rv = f.deprecated_instance_method_args(foo2='c', bah='b')
self.assertEqual(rv, ('b', 'c'))
self.assertDeprecation(
- 'bah argument of ' + __name__ + '.DeprecatedMethodClass.deprecated_instance_method_args is deprecated; use foo instead.')
+ 'bah argument of ' + __name__ + '.DeprecatedMethodClass.'
+ 'deprecated_instance_method_args is deprecated; use foo instead.')
self.assertNoDeprecation(
- 'bah2 argument of ' + __name__ + '.DeprecatedMethodClass.deprecated_instance_method_args is deprecated; use foo2 instead.')
+ 'bah2 argument of ' + __name__ + '.DeprecatedMethodClass.'
+ 'deprecated_instance_method_args is deprecated; use foo2 instead.')
DeprecatorTestCase._reset_messages()
@@ -666,7 +672,8 @@
self.assertDeprecation(
__name__ + '.DeprecatedMethodClass.deprecated_instance_method_and_arg is deprecated.')
self.assertNoDeprecation(
- 'bah argument of ' + __name__ + '.DeprecatedMethodClass.deprecated_instance_method_and_arg is deprecated; use foo instead.')
+ 'bah argument of ' + __name__ + '.DeprecatedMethodClass.'
+ 'deprecated_instance_method_and_arg is deprecated; use foo instead.')
DeprecatorTestCase._reset_messages()
@@ -676,7 +683,8 @@
self.assertDeprecation(
__name__ + '.DeprecatedMethodClass.deprecated_instance_method_and_arg is deprecated.')
self.assertDeprecation(
- 'bah argument of ' + __name__ + '.DeprecatedMethodClass.deprecated_instance_method_and_arg is deprecated; use foo instead.')
+ 'bah argument of ' + __name__ + '.DeprecatedMethodClass.'
+ 'deprecated_instance_method_and_arg is deprecated; use foo instead.')
DeprecatorTestCase._reset_messages()
@@ -686,7 +694,8 @@
self.assertDeprecation(
__name__ + '.DeprecatedMethodClass.deprecated_instance_method_and_arg is deprecated.')
self.assertNoDeprecation(
- 'bah argument of ' + __name__ + '.DeprecatedMethodClass.deprecated_instance_method_and_arg is deprecated; use foo instead.')
+ 'bah argument of ' + __name__ + '.DeprecatedMethodClass.'
+ 'deprecated_instance_method_and_arg is deprecated; use foo instead.')
def test_deprecated_instance_method_and_arg2(self):
"""Test @deprecated and @deprecate_arg with instance methods."""
@@ -698,7 +707,8 @@
self.assertDeprecation(
__name__ + '.DeprecatedMethodClass.deprecated_instance_method_and_arg2 is deprecated.')
self.assertNoDeprecation(
- 'bah argument of ' + __name__ + '.DeprecatedMethodClass.deprecated_instance_method_and_arg2 is deprecated; use foo instead.')
+ 'bah argument of ' + __name__ + '.DeprecatedMethodClass.'
+ 'deprecated_instance_method_and_arg2 is deprecated; use foo instead.')
DeprecatorTestCase._reset_messages()
@@ -708,7 +718,8 @@
self.assertDeprecation(
__name__ + '.DeprecatedMethodClass.deprecated_instance_method_and_arg2 is deprecated.')
self.assertDeprecation(
- 'bah argument of ' + __name__ + '.DeprecatedMethodClass.deprecated_instance_method_and_arg2 is deprecated; use foo instead.')
+ 'bah argument of ' + __name__ + '.DeprecatedMethodClass.'
+ 'deprecated_instance_method_and_arg2 is deprecated; use foo instead.')
DeprecatorTestCase._reset_messages()
@@ -718,7 +729,8 @@
self.assertDeprecation(
__name__ + '.DeprecatedMethodClass.deprecated_instance_method_and_arg2 is deprecated.')
self.assertNoDeprecation(
- 'bah argument of ' + __name__ + '.DeprecatedMethodClass.deprecated_instance_method_and_arg2 is deprecated; use foo instead.')
+ 'bah argument of ' + __name__ + '.DeprecatedMethodClass.'
+ 'deprecated_instance_method_and_arg2 is deprecated; use foo instead.')
if __name__ == '__main__':
diff --git a/tests/ipregex_tests.py b/tests/ipregex_tests.py
index cf61a19..75fe6a2 100644
--- a/tests/ipregex_tests.py
+++ b/tests/ipregex_tests.py
@@ -197,8 +197,10 @@
self.ipv6test(False, "1.2.3.4::")
# Testing IPv4 addresses represented as dotted-quads
- # Leading zero's in IPv4 addresses not allowed: some systems treat the leading "0" in ".086" as the start of an octal number
- # Update: The BNF in RFC-3986 explicitly defines the dec-octet (for IPv4 addresses) not to have a leading zero
+ # Leading zero's in IPv4 addresses not allowed: some systems treat the
+ # leading "0" in ".086" as the start of an octal number
+ # Update: The BNF in RFC-3986 explicitly defines the dec-octet
+ # (for IPv4 addresses) not to have a leading zero
self.ipv6test(False, "fe80:0000:0000:0000:0204:61ff:254.157.241.086")
self.ipv6test(True, "::ffff:192.0.2.128") # but this is OK, since there's a single digit
self.ipv6test(False, "XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:1.2.3.4")
diff --git a/tests/textlib_tests.py b/tests/textlib_tests.py
index 630ced4..2ade780 100644
--- a/tests/textlib_tests.py
+++ b/tests/textlib_tests.py
@@ -209,7 +209,8 @@
self.assertEqual(func('{{ a }}'), [('a', OrderedDict())])
self.assertEqual(func('{{a|b=c}}'), [('a', OrderedDict((('b', 'c'), )))])
self.assertEqual(func('{{a|b|c=d}}'), [('a', OrderedDict((('1', 'b'), ('c', 'd'))))])
- self.assertEqual(func('{{a|b=c|f=g|d=e|1=}}'), [('a', OrderedDict((('b', 'c'), ('f', 'g'), ('d', 'e'), ('1', ''))))])
+ self.assertEqual(func('{{a|b=c|f=g|d=e|1=}}'),
+ [('a', OrderedDict((('b', 'c'), ('f', 'g'), ('d', 'e'), ('1', ''))))])
self.assertEqual(func('{{a|1=2|c=d}}'), [('a', OrderedDict((('1', '2'), ('c', 'd'))))])
self.assertEqual(func('{{a|c=d|1=2}}'), [('a', OrderedDict((('c', 'd'), ('1', '2'))))])
self.assertEqual(func('{{a|5=d|a=b}}'), [('a', OrderedDict((('5', 'd'), ('a', 'b'))))])
@@ -217,7 +218,8 @@
self.assertEqual(func('{{a|=|}}'), [('a', OrderedDict((('', ''), ('1', ''))))])
self.assertEqual(func('{{a||}}'), [('a', OrderedDict((('1', ''), ('2', ''))))])
self.assertEqual(func('{{a|b={{{1}}}}}'), [('a', OrderedDict((('b', '{{{1}}}'), )))])
- self.assertEqual(func('{{a|b=<noinclude>{{{1}}}</noinclude>}}'), [('a', OrderedDict((('b', '<noinclude>{{{1}}}</noinclude>'), )))])
+ self.assertEqual(func('{{a|b=<noinclude>{{{1}}}</noinclude>}}'),
+ [('a', OrderedDict((('b', '<noinclude>{{{1}}}</noinclude>'), )))])
self.assertEqual(func('{{subst:a|b=c}}'), [('subst:a', OrderedDict((('b', 'c'), )))])
self.assertEqual(func('{{safesubst:a|b=c}}'), [('safesubst:a', OrderedDict((('b', 'c'), )))])
self.assertEqual(func('{{msgnw:a|b=c}}'), [('msgnw:a', OrderedDict((('b', 'c'), )))])
diff --git a/tox.ini b/tox.ini
index 915cf18..b0e14ae 100644
--- a/tox.ini
+++ b/tox.ini
@@ -136,10 +136,10 @@
[flake8]
ignore = E122,E127,E241,E265
exclude = .tox,.git,./*.egg,ez_setup.py,build,externals,user-config.py,./scripts/i18n/*
-max_line_length = 150
+max_line_length = 130
[pep8]
ignore = E122,E127,E241
exclude = .tox,.git,./*.egg,ez_setup.py,build,externals,user-config.py,./scripts/i18n/*
-max_line_length = 150
+max_line_length = 130
--
To view, visit https://gerrit.wikimedia.org/r/186780
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings
Gerrit-MessageType: merged
Gerrit-Change-Id: Iaf41e959fd03040b6428f061f2ea4a1586cf3c32
Gerrit-PatchSet: 3
Gerrit-Project: pywikibot/core
Gerrit-Branch: master
Gerrit-Owner: XZise <CommodoreFabianus(a)gmx.de>
Gerrit-Reviewer: John Vandenberg <jayvdb(a)gmail.com>
Gerrit-Reviewer: XZise <CommodoreFabianus(a)gmx.de>
Gerrit-Reviewer: jenkins-bot <>
jenkins-bot has submitted this change and it was merged.
Change subject: [core] replace deprecated getNamespaceIndex() with ns_index()
......................................................................
[core] replace deprecated getNamespaceIndex() with ns_index()
- getNamespaceIndex() is marked as a deprecated method.
- some doc cleanups
Change-Id: I96dc118224965696531011d3f3a84dc559784584
---
M scripts/templatecount.py
1 file changed, 7 insertions(+), 6 deletions(-)
Approvals:
XZise: Looks good to me, approved
jenkins-bot: Verified
diff --git a/scripts/templatecount.py b/scripts/templatecount.py
index ab70b97..036b012 100644
--- a/scripts/templatecount.py
+++ b/scripts/templatecount.py
@@ -21,13 +21,13 @@
Examples:
-Counts how many times {{ref}} and {{note}} are transcluded in articles.
+Counts how many times {{ref}} and {{note}} are transcluded in articles:
- python templatecount.py -count -namespace:0 ref note
+ templatecount.py -count -namespace:0 ref note
-Lists all the category pages that transclude {{cfd}} and {{cfdu}}.
+Lists all the category pages that transclude {{cfd}} and {{cfdu}}:
- python templatecount.py -list -namespace:14 cfd cfdu
+ templatecount.py -list -namespace:14 cfd cfdu
"""
#
@@ -97,14 +97,15 @@
mysite = pywikibot.Site()
# The names of the templates are the keys, and lists of pages
# transcluding templates are the values.
- mytpl = mysite.getNamespaceIndex(mysite.template_namespace())
+ mytpl = mysite.ns_index(mysite.template_namespace())
for template in templates:
transcludingArray = []
gen = pagegenerators.ReferringPageGenerator(
pywikibot.Page(mysite, template, ns=mytpl),
onlyTemplateInclusion=True)
if namespaces:
- gen = pagegenerators.NamespaceFilterPageGenerator(gen, namespaces)
+ gen = pagegenerators.NamespaceFilterPageGenerator(gen,
+ namespaces)
for page in gen:
transcludingArray.append(page)
yield template, transcludingArray
--
To view, visit https://gerrit.wikimedia.org/r/186640
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings
Gerrit-MessageType: merged
Gerrit-Change-Id: I96dc118224965696531011d3f3a84dc559784584
Gerrit-PatchSet: 1
Gerrit-Project: pywikibot/core
Gerrit-Branch: master
Gerrit-Owner: Xqt <info(a)gno.de>
Gerrit-Reviewer: XZise <CommodoreFabianus(a)gmx.de>
Gerrit-Reviewer: Xqt <info(a)gno.de>
Gerrit-Reviewer: jenkins-bot <>
jenkins-bot has submitted this change and it was merged.
Change subject: Improvement: Handle PageSaveRelatedError while putting sd template
......................................................................
Improvement: Handle PageSaveRelatedError while putting sd template
- When a speedy deletion template is to be written on a page,
handle PageSaveRelatedError and print that error.
- Print an additional message if the script cannot solve the
broken redirect or mark the page for deletion.
- Remove the blank line at the end of page processing.
Change-Id: I762db6a3e1c4b711dee009893fadab5ee7566ca4
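The pattern here is plain error isolation: catch the save failure, report it, and let the loop continue. A minimal sketch with local stand-ins (PageSaveRelatedError, error and put below are stand-ins for the pywikibot names, not imports):

```python
class PageSaveRelatedError(Exception):
    """Stand-in for pywikibot.PageSaveRelatedError."""


def error(msg):
    """Stand-in for pywikibot.error."""
    print('ERROR:', msg)


def put(content, reason):
    """Stand-in for redir_page.put that always fails."""
    raise PageSaveRelatedError('page is protected')


# Before the fix, the exception aborted the whole run; now it is
# reported and the next redirect is processed.
try:
    put('{{delete}}\n<!-- old content -->', 'Robot: broken redirect')
except PageSaveRelatedError as exc:
    error(exc)
print('continuing with the next redirect')
```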
---
M scripts/redirect.py
1 file changed, 6 insertions(+), 2 deletions(-)
Approvals:
XZise: Looks good to me, approved
jenkins-bot: Verified
diff --git a/scripts/redirect.py b/scripts/redirect.py
index 2fa2c83..6402c57 100755
--- a/scripts/redirect.py
+++ b/scripts/redirect.py
@@ -502,10 +502,15 @@
targetPage.site,
'redirect-broken-redirect-template'
) + "\n" + content
- redir_page.put(content, reason)
+ try:
+ redir_page.put(content, reason)
+ except pywikibot.PageSaveRelatedError as e:
+ pywikibot.error(e)
else:
pywikibot.output(
u'No speedy deletion template available')
+ else:
+ pywikibot.output(u'Cannot fix or delete the broken redirect')
except pywikibot.IsRedirectPage:
pywikibot.output(u"Redirect target %s is also a redirect! "
u"Won't delete anything."
@@ -516,7 +521,6 @@
pywikibot.output(
u'Redirect target %s does exist! Won\'t delete anything.'
% targetPage.title(asLink=True))
- pywikibot.output(u'')
def fix_double_redirects(self):
for redir_name in self.generator.retrieve_double_redirects():
--
To view, visit https://gerrit.wikimedia.org/r/173778
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings
Gerrit-MessageType: merged
Gerrit-Change-Id: I762db6a3e1c4b711dee009893fadab5ee7566ca4
Gerrit-PatchSet: 2
Gerrit-Project: pywikibot/core
Gerrit-Branch: master
Gerrit-Owner: Xqt <info(a)gno.de>
Gerrit-Reviewer: John Vandenberg <jayvdb(a)gmail.com>
Gerrit-Reviewer: Ladsgroup <ladsgroup(a)gmail.com>
Gerrit-Reviewer: Merlijn van Deen <valhallasw(a)arctus.nl>
Gerrit-Reviewer: Russell Blau <russblau(a)imapmail.org>
Gerrit-Reviewer: XZise <CommodoreFabianus(a)gmx.de>
Gerrit-Reviewer: Xqt <info(a)gno.de>
Gerrit-Reviewer: jenkins-bot <>
jenkins-bot has submitted this change and it was merged.
Change subject: QueryGen.set_namespace: no request param if empty
......................................................................
QueryGen.set_namespace: no request param if empty
If namespaces is None, [], or another empty iterable,
assume the caller does not want to restrict the
namespaces to 'no namespace', as that would prevent
any data from being returned.
Change-Id: I8224976fce0559c2663025d99dd86f55efe8e538
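The request-side behaviour can be sketched with a plain dict standing in for the API request (the function name is hypothetical):

```python
def set_namespace_param(request, prefix, namespaces):
    """Set or clear the <prefix>namespace request parameter.

    An empty value (None, [], ...) removes any existing restriction
    instead of asking for 'no namespace', which would return no data.
    """
    key = prefix + 'namespace'
    if namespaces:
        request[key] = namespaces
    elif key in request:
        del request[key]


request = {}
set_namespace_param(request, 'ap', [0])
print(request)  # {'apnamespace': [0]}
set_namespace_param(request, 'ap', [])
print(request)  # {} -- restriction cleared, all namespaces are returned
```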
---
M pywikibot/data/api.py
1 file changed, 7 insertions(+), 2 deletions(-)
Approvals:
XZise: Looks good to me, approved
jenkins-bot: Verified
diff --git a/pywikibot/data/api.py b/pywikibot/data/api.py
index 5d3b2ff..5cdc466 100644
--- a/pywikibot/data/api.py
+++ b/pywikibot/data/api.py
@@ -1740,7 +1740,8 @@
@param namespaces: namespace identifiers to limit query results
@type namespaces: iterable of basestring or Namespace key,
or a single instance of those types. May be a '|' separated
- list of namespace identifiers.
+ list of namespace identifiers. An empty iterator clears any
+ namespace restriction.
@raises KeyError: a namespace identifier was not resolved
@raises TypeError: a namespace identifier has an inappropriate
type such as NoneType or bool, or more than one namespace
@@ -1760,11 +1761,15 @@
namespaces = [ns.id for ns in
pywikibot.site.Namespace.resolve(namespaces,
self.site.namespaces)]
+
if 'multi' not in param and len(namespaces) != 1:
raise TypeError(u'{0} module does not support multiple namespaces'
.format(self.limited_module))
- self.request[self.prefix + "namespace"] = namespaces
+ if namespaces:
+ self.request[self.prefix + 'namespace'] = namespaces
+ elif self.prefix + 'namespace' in self.request:
+ del self.request[self.prefix + 'namespace']
def _query_continue(self):
if all(key not in self.data[self.continue_name]
--
To view, visit https://gerrit.wikimedia.org/r/186734
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings
Gerrit-MessageType: merged
Gerrit-Change-Id: I8224976fce0559c2663025d99dd86f55efe8e538
Gerrit-PatchSet: 1
Gerrit-Project: pywikibot/core
Gerrit-Branch: master
Gerrit-Owner: John Vandenberg <jayvdb(a)gmail.com>
Gerrit-Reviewer: John Vandenberg <jayvdb(a)gmail.com>
Gerrit-Reviewer: XZise <CommodoreFabianus(a)gmx.de>
Gerrit-Reviewer: jenkins-bot <>