It would be a quick'n'dirty solution. But it highlights an issue: we'd have the same problem with manual descriptions if they were to arrive in large numbers.
There's always Yet Another Table. Maybe a description would be generated on the fly only when a Wikidata page is actually viewed in a given language, and removed after ~1 month of "non-viewing"? That should keep the table short enough, but it would require extra effort for API calls and dumps, assuming those are supposed to show descriptions for /all/ languages.
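To make that idea a bit more concrete, here's a minimal sketch of the "generate on view, purge after a month" table. All names (autodesc_cache, generate_description, purge_stale) are made up for illustration, and SQLite just stands in for whatever storage layer would actually be used:

```python
import sqlite3
import time

TTL_SECONDS = 30 * 24 * 3600  # ~1 month of "non-viewing"

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE IF NOT EXISTS autodesc_cache (
        entity_id   TEXT NOT NULL,
        lang        TEXT NOT NULL,
        description TEXT NOT NULL,
        last_viewed INTEGER NOT NULL,
        PRIMARY KEY (entity_id, lang)
    )
""")

def generate_description(entity_id, lang):
    # Placeholder for the actual automatic description generator.
    return f"<auto-generated description of {entity_id} in {lang}>"

def get_description(entity_id, lang):
    """Return the cached description, generating it on first view."""
    now = int(time.time())
    row = db.execute(
        "SELECT description FROM autodesc_cache WHERE entity_id=? AND lang=?",
        (entity_id, lang),
    ).fetchone()
    if row:
        # Refresh the "last viewed" timestamp so the row survives the purge.
        db.execute(
            "UPDATE autodesc_cache SET last_viewed=? WHERE entity_id=? AND lang=?",
            (now, entity_id, lang),
        )
        return row[0]
    desc = generate_description(entity_id, lang)
    db.execute(
        "INSERT INTO autodesc_cache VALUES (?, ?, ?, ?)",
        (entity_id, lang, desc, now),
    )
    return desc

def purge_stale(now=None):
    """Drop rows not viewed for ~a month (run periodically, e.g. as a cron job)."""
    now = now or int(time.time())
    db.execute(
        "DELETE FROM autodesc_cache WHERE last_viewed < ?",
        (now - TTL_SECONDS,),
    )
```

The open question in this scheme is still the one above: API calls and dumps would either have to trigger generation for every language they touch, or fall back to generating descriptions without caching them.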
Then again there's the Labs Hadoop cluster, used for Analytics IIRC. That sounds like a way to process and store vast numbers of small, self-contained records (the description strings). It would tie the solution to Wikimedia, though, and require a lot of engineering effort to get started.