Discovery-alerts March 2019

discovery-alerts@lists.wikimedia.org
  • 4 participants
  • 58 discussions

** PROBLEM alert - search.svc.codfw.wmnet/ElasticSearch health check for shards on 9243 is CRITICAL **
by nagios@icinga1001.wikimedia.org
3 years, 8 months

** PROBLEM alert - search.svc.codfw.wmnet/ElasticSearch health check for shards on 9443 is CRITICAL **
by nagios@icinga1001.wikimedia.org
3 years, 8 months

** PROBLEM alert - search.svc.codfw.wmnet/ElasticSearch health check for shards on 9643 is CRITICAL **
by nagios@icinga1001.wikimedia.org
3 years, 8 months

** ACKNOWLEDGEMENT alert - icinga1001/Mjolnir bulk update failure check - eqiad is CRITICAL **
by nagios@icinga1001.wikimedia.org
3 years, 8 months

** PROBLEM alert - icinga1001/Mjolnir bulk update failure check - eqiad is CRITICAL **
by nagios@icinga1001.wikimedia.org
3 years, 8 months

Cron <hdfs@hadoop-coordinator-2> export PYTHONPATH=${PYTHONPATH}:/srv/deployment/analytics/refinery/python && /srv/deployment/analytics/refinery/bin/refinery-drop-hive-partitions -d 90 -D discovery -t query_clicks_hourly, query_clicks_daily >> /var/log/refinery/drop-query-clicks.log
by root@hadoop-coordinator-2.analytics.eqiad.wmflabs
3 years, 8 months
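The Cron failure mail above comes from a scheduled Hive partition-cleanup job. A readable sketch of what that crontab entry likely looks like follows; the command is taken verbatim from the message, while the schedule (daily at 03:00) is a hypothetical placeholder, not from the source:

```shell
# Hypothetical crontab entry for the hdfs user on hadoop-coordinator-2,
# reconstructed from the failure mail above. Only the command line is from
# the message; the "0 3 * * *" schedule is assumed for illustration.
#
# Caution: in the original subject the -t value contains a space after the
# comma ("query_clicks_hourly, query_clicks_daily"), which most CLIs would
# parse as two separate arguments. Written as one comma-separated token:
0 3 * * * export PYTHONPATH=${PYTHONPATH}:/srv/deployment/analytics/refinery/python && \
  /srv/deployment/analytics/refinery/bin/refinery-drop-hive-partitions \
    -d 90 \
    -D discovery \
    -t query_clicks_hourly,query_clicks_daily \
    >> /var/log/refinery/drop-query-clicks.log
```

Here `-d 90` drops partitions older than 90 days from the tables named by `-t` in the `discovery` Hive database; Cron mails the job's stderr to the owner, which is what produced the message above.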

** PROBLEM alert - icinga1001/Mjolnir bulk update failure check - codfw is CRITICAL **
by nagios@icinga1001.wikimedia.org
3 years, 8 months


** RECOVERY alert - search.svc.codfw.wmnet/ElasticSearch health check for frozen writes - 9243 is OK **
by nagios@icinga1001.wikimedia.org
3 years, 8 months

** PROBLEM alert - search.svc.codfw.wmnet/ElasticSearch health check for frozen writes - 9243 is CRITICAL **
by nagios@icinga1001.wikimedia.org
3 years, 8 months