This was going to be my question during the dev summit: why *don't* we do this, and improve our detection over time (and retroactively) instead of relying on users' devices?
Which is a long way of saying: +1 :)
-- Sent from my phone, please excuse brevity.

On Feb 3, 2015 12:10 PM, "Max Semenik" <maxsem.wiki@gmail.com> wrote:
Problem: while the apps can use the face detection built into iOS and Android for lead image positioning/cropping, mobile web has no equivalent, and even in the apps detection is quite slow, draining battery and causing user-visible slowdowns, especially on low-end Android devices.
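For context, here's a minimal sketch of what on-device detection looks like in the Android app today, assuming it goes through android.media.FaceDetector (the platform wrapper around the Neven library at [1]); the class and method names around it are illustrative, not the app's actual code:

    // Minimal sketch of on-device detection via android.media.FaceDetector.
    // Note this API requires the Bitmap to be in RGB_565 format.
    import android.graphics.Bitmap;
    import android.graphics.PointF;
    import android.media.FaceDetector;

    public class LeadImageFocus {
        // Returns the midpoint between the eyes of the first detected face,
        // or null if none is found; a caller could center the crop on it.
        public static PointF findFocalPoint(Bitmap image) {
            FaceDetector detector = new FaceDetector(
                    image.getWidth(), image.getHeight(), /* maxFaces */ 1);
            FaceDetector.Face[] faces = new FaceDetector.Face[1];
            int found = detector.findFaces(image, faces);
            if (found == 0) {
                return null;
            }
            PointF midPoint = new PointF();
            faces[0].getMidPoint(midPoint);
            return midPoint;
        }
    }

Every device running this per image is exactly the repeated work a server-side service would do once.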
With Dmitry's help, I found the sources of Android's face detection library ([1], separated out into a standalone library at [2]). This means we could build a face detection service and supply its results to all consumers, be they apps, web, or third parties.
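To make the idea concrete, here's a hypothetical sketch of such a service: detection runs once server-side (or in a batch job) and precomputed results are served as JSON. The NevenDetector behind the cache, the /faces endpoint shape, and the normalized-coordinate response format are all assumptions of mine, not a spec:

    // Hypothetical sketch: serve precomputed face positions over HTTP.
    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class FaceService {
        // Results keyed by image title; in production this would live in a
        // datastore populated by a batch detection job, not in memory.
        static final Map<String, String> RESULTS = new ConcurrentHashMap<>();

        public static void main(String[] args) throws Exception {
            // Coordinates as fractions of image size, so any thumbnail
            // resolution can reuse the same result.
            RESULTS.put("Example.jpg",
                    "{\"faces\":[{\"x\":0.42,\"y\":0.31,\"eyesDistance\":0.08}]}");

            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/faces", exchange -> {
                // e.g. GET /faces?title=Example.jpg
                String query = exchange.getRequestURI().getQuery();
                String title = query != null && query.startsWith("title=")
                        ? query.substring("title=".length()) : "";
                String body = RESULTS.getOrDefault(title, "{\"faces\":[]}");
                byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().set("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, bytes.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(bytes);
                }
            });
            server.start();
        }
    }

One detection pass per image, shared by every client, would also let us improve the detector and regenerate results retroactively.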
Thoughts?
[1] https://android.googlesource.com/platform/external/neven/+/master
[2] https://github.com/lqs/neven
-- Best regards, Max Semenik ([[User:MaxSem]])