On Jun 26, 2023, at 4:17 PM, Merlijn van Deen (valhallasw) <valhallasw@arctus.nl> wrote:
[…] there is a massive value in tests that test the integration with Wikimedia wikis: it's great if mocked API calls work, but if it breaks against the real API, Pywikibot is not doing its job for the user. Unfortunately that does mean the contents of the wiki now are a dependency for the test as well.
I certainly agree that integration tests are useful. Even essential. But there are two different things here. One is talking to another piece of software over a network connection vs. using a mocked test fixture. Mocks are great for some things, but they utterly fail at demonstrating that you actually understand the API you're talking to (and misunderstanding it is all too easy with a complex API like MediaWiki).
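To make that concrete, here's a hypothetical, self-contained example (nothing from Pywikibot's actual test suite): a mock-based test that stays green no matter what the real API does, because all it checks is that the code agrees with our own canned assumptions about the API.

    # Hypothetical sketch, not Pywikibot code: a mocked API test.
    from unittest import mock

    import requests


    def fetch_sitename(api_url):
        """Ask the MediaWiki action API for the wiki's site name."""
        params = {"action": "query", "meta": "siteinfo", "format": "json"}
        data = requests.get(api_url, params=params).json()
        return data["query"]["general"]["sitename"]


    def test_fetch_sitename_mocked():
        canned = {"query": {"general": {"sitename": "Wikipedia"}}}
        with mock.patch("requests.get") as fake_get:
            fake_get.return_value.json.return_value = canned
            # Passes even if the real siteinfo response changes shape:
            # the mock only proves we agree with ourselves.
            assert fetch_sitename("https://example.org/w/api.php") == "Wikipedia"

If the API ever restructures that response, this test keeps passing while the code breaks against the real wiki.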
The orthogonal problem is depending on whatever data happens to be in the database of the thing you're talking to vs. setting up the test conditions when you run the test. That's the bigger issue here. It doesn't matter whether you're talking about unit tests, integration tests, or anything else: if the outcome of your test depends on whatever some random person on the internet happened to do yesterday, it's hopeless.
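The fix, in whatever kind of test, is to have the test create the data it asserts against. A rough sketch (assuming a logged-in account on test.wikipedia.org, the usual user-config.py, and delete rights for the teardown; none of that is a given for the existing suite):

    # Rough sketch: the test writes known content before asserting on it.
    import uuid

    import pywikibot


    def test_page_roundtrip():
        site = pywikibot.Site("test", "wikipedia")
        # A unique title so concurrent runs don't trip over each other.
        title = "Pywikibot test page " + uuid.uuid4().hex
        page = pywikibot.Page(site, title)

        expected = "Content created by the test itself."
        page.text = expected
        page.save(summary="test setup")            # arrange
        try:
            fresh = pywikibot.Page(site, title)    # act: re-fetch from the wiki
            assert fresh.text == expected          # assert against known data
        finally:
            # Teardown; requires delete rights on the target wiki.
            page.delete(reason="test teardown", prompt=False)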
Overall, I suspect the right path forward is spinning up a live server in a Docker container, combined with having each test create the content it needs as part of its setup (and clean it up during teardown). Unfortunately, my understanding of Docker is rudimentary at best, so engineering such an environment is beyond me.
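Just to sketch the shape of what I mean (very rough and untested; it assumes the docker CLI is on PATH and uses the official `mediawiki` image from Docker Hub, and the container would still need an install step or a pre-baked LocalSettings.php before its API is usable, which is the part I don't know how to do):

    # Very rough sketch of the Docker idea.
    import subprocess
    import time

    import requests


    def start_test_wiki(name="pwb-test-wiki", port=8080):
        subprocess.run(
            ["docker", "run", "-d", "--rm", "--name", name,
             "-p", f"{port}:80", "mediawiki"],
            check=True,
        )
        # Poll until the web server inside the container answers.
        for _ in range(30):
            try:
                requests.get(f"http://localhost:{port}/", timeout=1)
                return f"http://localhost:{port}/"
            except requests.ConnectionError:
                time.sleep(1)
        raise RuntimeError("MediaWiki container did not come up")


    def stop_test_wiki(name="pwb-test-wiki"):
        subprocess.run(["docker", "rm", "-f", name], check=True)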