However, given that we now have well-informed, established practices
and good quality checks, it seems unproblematic to lift the character
limit. I don't think there are major technical reasons for having it.
Blazegraph (the WMF SPARQL engine) should not assume texts are short,
and I would be surprised if it did, so I would not expect problems on
that side.
I don't think there should be much trouble in this department. Unless
one is literally trying to download megabytes of data or millions of
items from a query (which we are working on a solution for, but not
yet), the size of the string doesn't matter much, and there would
probably be no noticeable difference between 400-byte and 2K strings
for most queries I can think of. Searching within such strings probably
won't work very well, but that's not the intent anyway, as I understand it.
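To make the "megabytes of data" point concrete, here is a rough
back-of-envelope sketch. The row counts and the helper function are
illustrative assumptions, not WDQS measurements; the idea is just that
string length only dominates result payload when a query already
returns a pathological number of rows:

```python
def result_payload_mb(rows, bytes_per_string):
    """Approximate size of one string column in a result set, in MB."""
    return rows * bytes_per_string / 1_000_000

# A typical query returning a few hundred rows:
print(result_payload_mb(500, 400))    # 0.2 MB at the old limit
print(result_payload_mb(500, 2_000))  # 1.0 MB at the new limit; still tiny

# Pathological: millions of rows, where either limit is already a problem:
print(result_payload_mb(2_000_000, 400))    # 800 MB
print(result_payload_mb(2_000_000, 2_000))  # 4000 MB
```

Under these (assumed) numbers, the 400-vs-2K difference is invisible
for ordinary queries and only matters in the bulk-download case that is
problematic either way.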
The only thing I can think of is that we now store the whole item as a
huge blob in the DB (and consequently load it into memory), so if we
had a lot of huge strings it could have a negative performance impact.
But I don't think changing a property that is usually one per item from
400 bytes to 2K would change anything.
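The blob-growth argument can be sketched with equally rough arithmetic.
The average item blob size below is an assumption for illustration, not
a measured figure:

```python
# Hypothetical numbers: how much can raising one string-valued property
# from 400 bytes to 2K grow a stored item blob, in the worst case?

AVG_ITEM_BLOB_BYTES = 50_000   # assumed average serialized item size
OLD_LIMIT = 400
NEW_LIMIT = 2_000
PROPS_PER_ITEM = 1             # the property in question is usually one per item

worst_case_growth = (NEW_LIMIT - OLD_LIMIT) * PROPS_PER_ITEM
relative_growth = worst_case_growth / AVG_ITEM_BLOB_BYTES

print(f"worst-case extra bytes per item: {worst_case_growth}")   # 1600
print(f"relative blob growth: {relative_growth:.1%}")            # 3.2%
```

Even in the worst case (every item using the full new limit), the blob
grows by a few percent under these assumptions, which matches the
intuition that a single such property per item changes little.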