On Thu, Sep 23, 2010 at 7:19 PM, Dan Nessett <dnessett@yahoo.com> wrote:
> I appreciate your recent help, so I am going to ignore the tone of your last message and focus on issues. While a test run can set up, use, and then delete the temporary resources it needs (i.e., db, images directory, etc.), you really haven't answered the question I posed. If the test run ends abnormally, then it will not delete those resources. There has to be a way to garbage collect orphaned dbs, images directories and cache entries.
Any introductory Unix sysadmin handbook will include examples of shell scripts to find old directories and remove them, etc. For that matter you could simply delete *all* the databases and files on the test machine every day before test runs start, and not spend even a second of effort worrying about cleaning up individual runs.
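As a minimal sketch of such a sweep, assuming each run keeps its images directory and other working files in one directory per run under a known base path (the path and naming below are hypothetical, not what our test harness actually uses):

```shell
#!/bin/sh
# Nightly sweep for orphaned test-run directories. A run that ends
# abnormally simply leaves its directory behind; this removes anything
# that has not been touched in over two days.
# TESTRUN_BASE is a hypothetical location; adjust for the real setup.
TESTRUN_BASE=${TESTRUN_BASE:-/tmp/parsertest-runs}

if [ -d "$TESTRUN_BASE" ]; then
    # Only look at top-level run directories, not their contents.
    find "$TESTRUN_BASE" -mindepth 1 -maxdepth 1 -type d -mtime +2 \
        -exec rm -rf {} +
fi
```

Dropped into cron, that one `find` invocation is the entire garbage collector; orphaned databases could be swept the same way by matching stale database names against a naming prefix.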
Since each test database is a fresh slate, there is no shared state between runs -- there is *no* need to clean up immediately between runs or between test sets.
> My personal view is we should start out simple (as you originally suggested) with a set of fixed URLs that are used serially by test runs. Implementing this is probably the easiest option and would allow us to get something up and running quickly. This approach doesn't require significant development, although it does require a way to control access to the URLs so test runs don't step on each other.
What you suggest is harder to implement than creating a fresh database for each test run, and gives no clear benefit in exchange.
Keep it simple by *not* implementing this idea of a fixed set of URLs which must be locked and multiplexed. Creating a fresh database & directory for each run does not require any additional development. It does not require devising any access control. It does not require devising a special way to clean up resources or restore state.
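To make the point concrete, here is a sketch of per-run setup, assuming a timestamp-plus-PID naming scheme (the names, paths, and the MySQL backend mentioned in the comment are illustrative assumptions, not the harness's actual convention):

```shell
#!/bin/sh
# Sketch: give every test run its own throwaway resources, so runs
# cannot step on each other and there is no shared state to restore.

# Unique per run: timestamp plus the PID of this script.
RUN_ID="test_$(date +%Y%m%d%H%M%S)_$$"

# Private directory for this run's images and cache files.
RUN_DIR=$(mktemp -d "/tmp/${RUN_ID}.XXXXXX")

# Hypothetical database name derived from the same run id.
DB_NAME="parsertest_${RUN_ID}"

echo "would create database $DB_NAME, working in $RUN_DIR"
# e.g. mysqladmin create "$DB_NAME"   # assumption: MySQL backend
```

No locking, no access control, no state restoration: two runs started at the same moment get different names, and anything left behind by a crashed run is just dead weight for a nightly sweep to remove.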
-- brion