Hey,

I'm setting up a database from the Wikipedia XML dumps. As you know, if we import all of the revision dumps, the data comes to roughly 22-25 terabytes.
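For context, here is a rough sketch (in Python) of how I was planning to stream-parse a dump and push revisions into MySQL. The file name and the load_revision() helper are placeholders, and the tag names follow the MediaWiki export schema:

import bz2
import xml.etree.ElementTree as ET

# Placeholder file name; the real full-history dumps are split into many parts.
DUMP = "enwiki-latest-pages-meta-history.xml.bz2"

def local(tag):
    """Strip the {namespace} prefix that ElementTree puts on every tag."""
    return tag.rsplit("}", 1)[-1]

def load_revision(title, rev_id, text):
    """Placeholder for the code that INSERTs one revision into MySQL."""
    pass

with bz2.open(DUMP, "rb") as stream:
    context = ET.iterparse(stream, events=("start", "end"))
    _, root = next(context)            # the <mediawiki> root element
    title = None
    for event, elem in context:
        if event != "end":
            continue
        tag = local(elem.tag)
        if tag == "title":
            title = elem.text
        elif tag == "revision":
            rev_id = elem.findtext("{*}id")       # {*} = any namespace (Python 3.8+)
            text = elem.findtext("{*}text") or ""
            load_revision(title, rev_id, text)
            elem.clear()               # free this revision's text right away
        elif tag == "page":
            root.clear()               # drop finished pages so memory stays bounded

Even with streaming like this, the resulting database is the size I mentioned above.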

That is far too much data for us to handle. Is there a workaround?

If it really is necessary to load all of the XML dumps into the database, then as far as I know common Linux filesystems such as ext4 top out at around 16 TB for a single file. How, then, can we import everything into a single MySQL server running on Linux?
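To make the arithmetic behind my worry explicit (assuming that rough 16 TB single-file limit and the 22-25 TB estimate above):

# Rough arithmetic behind the question above.
MAX_FILE_TB = 16     # assumed single-file limit on the filesystem
DATASET_TB = 25      # upper estimate for the imported revision data

# Ceiling division: minimum number of files the data must be split across.
files_needed = -(-DATASET_TB // MAX_FILE_TB)
print(f"The data would have to be split across at least {files_needed} files of <= {MAX_FILE_TB} TB each")

So a single self-contained data file clearly will not work, which is why I am asking what the usual approach is.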

Could someone also explain how Wikipedia itself currently manages this much data, in terms of both software and hardware?

Regards,
Imran