Hello Ahmed, nice to meet you!
As a data analyst who constantly works with the edit data, I would love to have it updated daily too. But there are serious infrastructural limitations that make that very difficult.
Both the edit data and pageview data that you're talking about come from the Hadoop-based
Analytics Data Lake. However, because of limitations in the underlying
MediaWiki application databases that Hive pulls edit data from, the data requires some
complex reconstruction and denormalization that takes several days to a week. This mostly affects the historical data, but because history sometimes changes in the MediaWiki databases long after the fact, the reconstruction currently has to be done for all of history at once. As a result, the entire dataset is regenerated every month, which would be impossible to do on a daily schedule.
I'm sure there are strategies that could ultimately fix these problems, but I'm also sure that they would take great effort to implement, so unfortunately that's unlikely to happen anytime soon.
In the meantime, you may be able to work around these issues by using the
public replicas of the application databases. Unlike with the API, you'd have to do the computation yourself, but the replicas are updated in (near) real time.
Quarry is an excellent, easy-to-use tool for running SQL queries on those replicas.
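To give a rough idea of what that do-it-yourself computation looks like, here's a minimal sketch of querying a replica directly from Python. The hostname, the `~/replica.my.cnf` credentials file, and the `enwiki_p` database name are assumptions based on the usual Toolforge setup, so please check the current Wiki Replicas documentation on Wikitech rather than taking them from here.

```python
# A minimal sketch, assuming a Toolforge/Cloud VPS environment where replica
# credentials are already provisioned. Hostname, credentials path, and database
# name are illustrative assumptions, not guaranteed to match the current setup.
import os
import pymysql

conn = pymysql.connect(
    host="enwiki.analytics.db.svc.wikimedia.cloud",             # assumed replica host
    read_default_file=os.path.expanduser("~/replica.my.cnf"),   # assumed credentials file
    database="enwiki_p",
    charset="utf8mb4",
)

# Count edits made in the last 24 hours. rev_timestamp is stored as a
# YYYYMMDDHHMMSS string in the MediaWiki schema, so we compare it against a
# formatted UTC timestamp.
sql = """
SELECT COUNT(*) AS edits_last_day
FROM revision
WHERE rev_timestamp >= DATE_FORMAT(UTC_TIMESTAMP() - INTERVAL 1 DAY, '%Y%m%d%H%i%S')
"""

with conn.cursor() as cur:
    cur.execute(sql)
    print(cur.fetchone()[0])

conn.close()
```

If you use Quarry instead, you'd paste in just the SQL part, since Quarry handles the database connection for you.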
I'm not an expert on the Data Lake, but I'm pretty sure this is broadly accurate. Corrections from the Analytics team welcome :)