Unpack the tarball and look at the README.rst file. In a nutshell, all you should have to do is:
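(The exact commands didn't survive here; this is the usual CPython source-build sequence, and the --prefix path is just an illustrative choice, not anything official:)

$ ./configure --prefix=$HOME/python-3.7.3-install
$ make
$ make install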
and then create a virtualenv using the binaries you just built as a base. If memory serves, the configure step should take a few minutes, and the make is on the order of an hour.
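The venv step, assuming the example --prefix above, would be along these lines:

$ $HOME/python-3.7.3-install/bin/python3 -m venv env
$ source env/bin/activate
$ python --version   # should report the version you just built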
And is it safe to assume that if I build Python from source and put it inside a venv, when I submit to the grid it'll use my Python for the job?
I don't do much grid work, but, looking at my old jobs, it looks like I was running grid jobs with a venv:
$ cat jobs/get_socks.2019-12-16-01-51-12/job.bash
#!/bin/bash
source /home/roysmith/sock-classifier/env/bin/activate
/home/roysmith/sock-classifier/src/utils/get_socks.py \
--archive-dir=/home/roysmith/sock-classifier/data/archives-22618 \
--job-name=get_socks.2019-12-16-01-51-12 \
--log=/home/roysmith/sock-classifier/jobs/get_socks.2019-12-16-01-51-12/get_socks.log
which seemed to work fine. That particular job was done with the stock Python binary, but I can't see how it would be any different with a venv built on a binary you compiled yourself. If you want to try an experiment, see if you can run /data/project/spi-tools-dev/python-distros/Python-3.7.3-install/bin/python3 (I think the permissions should allow it). Assuming you can, try building a venv against that binary and see if it works for you on the grid.
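Concretely, that experiment would look something like this (the binary path is the one above; ~/test-env is just a throwaway name I'm making up):

$ /data/project/spi-tools-dev/python-distros/Python-3.7.3-install/bin/python3 --version
$ /data/project/spi-tools-dev/python-distros/Python-3.7.3-install/bin/python3 -m venv ~/test-env
$ source ~/test-env/bin/activate
$ python --version   # should match the binary above, not the stock Python

Then point a trivial grid job at ~/test-env/bin/activate, the same way job.bash above sources its venv.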
WARNING: this is just a quick-and-dirty suggestion for testing that it works. I absolutely don't guarantee that I won't blow away that directory tomorrow, so if the experiment works and you want to go this route for your production job, you'll totally want to build your own.
Also, please note, I'm just speaking as a Toolforge user. I can't speak for what the folks who run Toolforge think about this. They haven't told me I can't do it, but I don't know if they're totally on board with the idea.