I've been a skeptic of the Wikidata project at times, but over the long run I have seen them do the right things and get results.
The trouble with Freebase is that it never had much of a community. Part of it was that it didn't need one, because it had paid curators, but other factors made community-building difficult too. The main one: if I need to make a data set to do task A, I can get maybe 50-90% of the way there with Freebase, but then I have to do my own curation, and I may need to disagree with Freebase about particular things and just not deal with the whole hassle of reconciling with the source system.
Also, the CC-BY license let people use Freebase data without contributing anything substantial back; this certainly led to more people using Freebase, but some kind of viral (share-alike) license might have forced organizations to make contributions back.
On a technical basis, the continuation of Freebase in the sense of a live and updated :BaseKB could be done with a few measures: (i) you need some system for controlling the issuance of mid identifiers; (ii) with that in place, you can add and remove any facts you want; and (iii) if there were a good RDF-based "data wiki" that scales to the right size, you should be able to load the data in there and go.
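Measure (i) above could be sketched as a small issuing authority that hands out monotonically increasing counters and encodes them in a vowel-free base-32 alphabet. Note the assumptions: the 32-symbol alphabet and the `/m/0` prefix here are modeled on how Freebase mids *look* (e.g. `/m/02mjmr`), not on the actual minting algorithm, and the counter is held in memory rather than persisted.

```python
# Hypothetical sketch of a mid-issuing service: one authority advances a
# counter so freshly minted identifiers can never collide with ones
# already issued. Alphabet and "/m/0" prefix are assumptions styled
# after Freebase mids, not the real scheme.

ALPHABET = "0123456789bcdfghjklmnpqrstvwxyz_"  # 32 symbols, no vowels


class MidIssuer:
    def __init__(self, start=0):
        self._next = start  # would be durably persisted in a real system

    def mint(self):
        """Return a fresh mid string and advance the counter."""
        n = self._next
        self._next += 1
        digits = []
        while True:
            digits.append(ALPHABET[n % 32])
            n //= 32
            if n == 0:
                break
        return "/m/0" + "".join(reversed(digits))


issuer = MidIssuer()
print(issuer.mint())  # → /m/00
print(issuer.mint())  # → /m/01
```

Because issuance is centralized and the counter only moves forward, a fact attached to a mid can later be retracted or contradicted (measure (ii)) without the identifier ever being reused for a different entity.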
The funding issue for that looks tricky to me. I think both consumers and funding agencies want to see something that addresses some particular problem, so what you really need to do is what Google did: treat Freebase not as a fish that was caught for you, but as a way to learn how to fish.