Actually it would: each new file adds disk read time plus file open and file close overhead. Depending on the language, it may also have to create a new JSON parser, parse the file, and then destroy the parser. With 1-2 files it's not a big deal, but as the number of files scales up, the overhead becomes more and more of a bottleneck.
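As a minimal sketch of where that per-file cost comes from (not something from the bug report, just an illustration), the snippet below parses the same set of messages split across 1, 50, or 1000 JSON files; the file names, message counts, and helper functions are made up for the example, and the 50/1000 figures just mirror the made-up numbers in the quoted comment:

import json
import os
import tempfile
import time

def write_messages(directory, messages, n_files):
    # Split the `messages` dict across roughly `n_files` JSON files.
    keys = list(messages)
    chunk = max(1, len(keys) // n_files)
    paths = []
    for i in range(0, len(keys), chunk):
        path = os.path.join(directory, 'msg-%d.json' % i)
        with open(path, 'w') as f:
            json.dump({k: messages[k] for k in keys[i:i + chunk]}, f)
        paths.append(path)
    return paths

def parse_all(paths):
    # The open / parse / close cost is paid once per file.
    result = {}
    for path in paths:
        with open(path) as f:            # file open
            result.update(json.load(f))  # new parse for each file
        # file close happens when the `with` block exits
    return result

if __name__ == '__main__':
    messages = {'key-%d' % i: 'message text %d' % i for i in range(10000)}
    for n_files in (1, 50, 1000):
        with tempfile.TemporaryDirectory() as d:
            paths = write_messages(d, messages, n_files)
            start = time.perf_counter()
            parse_all(paths)
            print('%4d files: %.4f s' % (len(paths), time.perf_counter() - start))

The total number of bytes parsed is identical in all three runs; only the number of open/parse/close cycles changes, which is the overhead being described above.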
On Wed, Apr 2, 2014 at 1:29 PM, bugzilla-daemon@wikimedia.org wrote:
https://bugzilla.wikimedia.org/show_bug.cgi?id=63327
--- Comment #2 from Niklas Laxström niklas.laxstrom@gmail.com --- It shouldn't matter too much for parse time whether N messages are in 50 or 1000 files (made-up numbers).