If the challenge is downloading large files, you can also get local access to all of the dumps (Wikidata, Wikipedia, and more) through PAWS (Wikimedia-hosted Jupyter notebooks) and Toolforge (a more general-purpose Wikimedia hosting environment). From Toolforge you could run the Wikidata Toolkit (Java) that Denny mentions. I'm personally more familiar with Python, so my suggestion is to use Python code to filter the dumps down to what you need. Below is an example Python notebook that does this on PAWS. Note, however, that the PAWS environment is not set up for long-running jobs and will probably kill the process before it completes, so I'd highly recommend converting the notebook into a script that runs on Toolforge (see
https://wikitech.wikimedia.org/wiki/Help:Toolforge/Dumps).
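The core of that filtering approach is to stream the compressed JSON dump entity by entity rather than loading it into memory (the Wikidata `latest-all.json.bz2` dump is a single JSON array with one entity per line). A minimal sketch, where the dump path and the `P31` filter are just illustrative assumptions:

```python
import bz2
import json

def iter_entities(dump_path):
    """Stream entities one at a time from a Wikidata JSON dump (.json.bz2).

    The dump is one large JSON array with one entity object per line,
    so we can parse it line by line instead of loading the whole file.
    """
    with bz2.open(dump_path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip().rstrip(",")  # entity lines end with a comma
            if not line or line in ("[", "]"):  # skip the array brackets
                continue
            yield json.loads(line)

def has_property(entity, prop):
    """Keep entities that carry at least one claim for a given property."""
    return prop in entity.get("claims", {})

# Hypothetical usage: collect IDs of entities with an instance-of (P31) claim.
# matching = [e["id"] for e in iter_entities("latest-all.json.bz2")
#             if has_property(e, "P31")]
```

On Toolforge the same script can read the dumps directly from the shared `/public/dumps` mount described at the link above, so nothing needs to be downloaded first.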