From: analytics-bounces@lists.wikimedia.org [mailto:analytics-bounces@lists.wikimedia.org] On Behalf Of David Schoonover
Sent: Tuesday, November 13, 2012 10:14 PM
To: Analytics List
Subject: Re: [Analytics] Kraken Hardware Assignments

 

Did we scale down the initial investment? In the March hardware planning, the system was envisioned to "still handle 2013 traffic growth".

 

1. Yes, the hardware funding we received was roughly "Tranche B" from our Hardware Planning docs:

https://www.mediawiki.org/wiki/Analytics/2012-2013_Roadmap/Hardware

 

                The quoted comment was about Tranche B, so this is still a setback.

 

--

If we were to sample at, say, 1:10 (with 1:1 for limited ranges), we would have enough storage for years.

 

2. Our goal is Zero Sampling. The simplest reason is obvious: you can always throw stuff away later, but sampling is forever.

 

Sampling precludes any rich, nested query on the datastream, and it renders low-traffic streams meaningless (like unique viewers on long-tail wiki pages). The only alternative is, as you say, to protect certain datastreams, which is a maintenance headache for per-page analytics and cannot be done retroactively for ad-hoc questions. On the other hand, you can always throw stuff away.

 

                Yes, you can. At the cost of overhauling the basic architecture to some extent, which is more difficult once the system is in place.
                I'd dub this the "don't worry, we'll fix it later" outlook on life.

 

                Without some naive optimism there would be no Wikipedia; it would have seemed much too complicated and failure-prone.

                Yet the same "worry later" attitude brought us wiki syntax, and we've been fixing that for 10 years already.

                I may be old school, but I've always believed that ICT experts are paid to worry: about security, robustness, costs.

                Arguably next to creativity worrying is our second core competence.  ;-)

 

Big Data is all about exploration, so I'm strongly opposed to sampling. If, somehow, we were to run low on space, we have a number of tricks that could losslessly reduce the data footprint at the cost of complexity (like data-tailored record bundling, or building the perfect adaptive PPM encoder for each file). If ultimately our cleverness failed us, we'd hang our heads in shame and prune historical data (but retain the rollups).
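To put a rough number on the long-tail point above, here is a tiny back-of-the-envelope sketch (the 50-views-a-day page is a made-up example, not a measurement):

    # What 1:10 sampling does to a low-traffic page (illustrative numbers only).
    daily_views = 50                  # a hypothetical long-tail page
    sampling_rate = 0.1               # 1:10 sampling

    expected = daily_views * sampling_rate                               # 5.0 sampled views
    stddev = (daily_views * sampling_rate * (1 - sampling_rate)) ** 0.5  # ~2.1 (binomial)
    print("%.1f +/- %.1f sampled views (~%.0f%% relative error)"
          % (expected, stddev, 100 * stddev / expected))
    # Unique-viewer counts fare even worse: a reader seen once has only a
    # 10% chance of showing up in the sample at all.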

 

 

 

-- 

David Schoonover

dsc@wikimedia.org

 

On Tuesday, 13 November 2012 at 12:43 PM, Erik Zachte wrote:

> "Awesome. Even given aggregate datasets from jobs, I think we're probably okay for space for at least 6 months. That's plenty of time evidence the value of analytics, and make a strong case for more disks. July is ~8 months away, so the timing lines up perfectly."

 

Forgive me for seeing this as somewhat less than awesome.

 

Did we scale down the initial investment? In the March hardware planning, the system was envisioned to "still handle 2013 traffic growth".
If our fundraiser exceeds expectations, as it usually does, you're probably right that extra expenses will be approved. However, that is still unknown.

If we were to sample at, say, 1:10 (with 1:1 for limited ranges), we would have enough storage for years. Just a reminder.

I'm not qualified to set overall priorities, but this is the time to vet basic assumptions.

Or at least to make the budgetary consequences a bit more explicit.

Erik

 

From: analytics-bounces@lists.wikimedia.org [mailto:analytics-bounces@lists.wikimedia.org] On Behalf Of David Schoonover
Sent: Tuesday, November 13, 2012 8:07 PM
To: Analytics List
Subject: Re: [Analytics] Kraken Hardware Assignments

 

Awesome job! A couple of comments on the allocations:

 

- NameNode

 

1. I think we definitely want to run the NameNode on a Cisco machine -- NN latency affects responsiveness across all of HDFS, as it's used to address everything. We're also running the menagerie of Hadoop-related external dependencies on this machine (e.g., MySQL for Hive and Oozie). I'd prefer not to tempt fate by co-locating anything with the primary NameNode.

2. an01 currently has our cluster's public IP, and thus hosts a bunch of utilities, making it a DDoS waiting to happen. We should move the NN to an10 (or whatever other Cisco machine pleases you), as that's way easier than moving the IP. Which brings up...

3. NameNode Frailty. Generally speaking, I think "NameNode is Hadoop's SPOF" tends to be a bit overblown. Our cluster isn't on the critical path of any client-facing operation. It's also not terribly big, so any hardware dedicated to a hot spare (via AvatarNode, Facebook's high-availability NameNode iirc, or NFS or whatever) would cut into the jobs we can realistically service on a daily basis. Once we're sure we have the metal to stay ahead of demand we should revisit this, but I don't think we're there yet.

4. The Secondary NN is essentially the spare tire in the trunk -- it [sadly] gets no load so long as the primary is up -- so we can probably run it on an R720 alongside the Hadoop Miscellany and the ETL Nimbus (Storm's JobTracker).

 

- Kafka. I think we should probably move our primary Kafka brokers to Cisco machines, for the same reason I agree we should run ETL on those machines. Both systems are hard-realtime -- they need to run ahead of the incoming data stream or we'll be forced to drop data. The more RAM, the more cores, the better; headroom here is important as load will be variable.

 

- Monitoring. We plan to run both Ganglia and some kind of application-level JMX monitoring. Though these services tend to use a decent chunk of network, they're not otherwise terribly resource-hungry. We can probably get away with sticking both on an R720 and otherwise reserve that box for staging and other ops utility work.

 

- Storage

If we snappy compress and do more rounding up, that's 7 TB / week*.

Awesome. Even given aggregate datasets from jobs, I think we're probably okay for space for at least 6 months. That's plenty of time to evidence the value of analytics, and make a strong case for more disks. July is ~8 months away, so the timing lines up perfectly.

 

 

Everything else looks great. I think this leaves the cluster looking like this:

 

an01-an07 (7): ETL Workers

an08, an09 (2): Kafka Brokers

an10 (1): Primary NameNode

an11-an22 (12): Hadoop Workers & DataNodes

an23-an25 (3): Cluster-wide ZooKeepers

an26 (1): Monitoring (JMX & Ganglia)

an27 (1): Secondary NameNode, Hive Metastore (etc), Storm Nimbus

 

 

 

-- 

David Schoonover

 

On Monday, 12 November 2012 at 9:08 AM, Andrew Otto wrote:

Woooweeeee!

 

Now that we've got all of our servers up and running, let's take a minute to assign them all their official roles.

 

Summary of what we've got:

 

analytics1001 - analytics1010:

Cisco UCS C250 M1

192G RAM

8 x 300G = 2.4T

24 core X5650 @ 2.67 GHz

 

analytics1011 - analytics1022:

Dell Poweredge R720

48G RAM

12 * 2T = 24T

12 core E5-2620 @ 2.00GHz

 

analytics1023 - analytics1027:

Dell PowerEdge R310

8G RAM

2 * 1T = 2T

4 core X3430 @ 2.40GHz

 

an11 - an22 are easy. They should be Hadoop Worker (HDFS) nodes, since they have so much storage space!

 

an23-an27 are relative weaklings and should not be used for compute or data needs. I've currently got ZooKeepers running on an23, an24, and an25 (we need 3 for a quorum), and I think we should keep it that way.

 

The remaining assignments require more discussion. Our NameNode is currently on an01, but as Diederik pointed out last week, this is a bit of a waste of a node, since it is so beefy. I'd like to suggest that we use an26 and an27 for NameNode and backup NameNode.

 

My rudimentary Snappy compression test reduces web access log files to about 33% of their original size. According to the unsampled file we saved back in August, uncompressed web request logs generate about 100 GB / hour. Rounded (way) up, that's 20 TB / week.

 

If we snappy compress and do more rounding up, that's 7 TB / week*.
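For reference, the arithmetic behind those two figures (a sketch; the 100 GB/hour rate and the ~33% Snappy ratio are the measurements mentioned above):

    # Back-of-the-envelope check of the weekly volume figures.
    gb_per_hour = 100                 # unsampled web request logs (August sample)
    hours_per_week = 24 * 7

    raw_tb = gb_per_hour * hours_per_week / 1000.0   # ~16.8 TB/week, rounded way up to 20
    snappy_tb = raw_tb * 0.33                        # ~5.5 TB/week, rounded up to 7
    print("raw: %.1f TB/week, snappy-compressed: %.1f TB/week" % (raw_tb, snappy_tb))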

 

 

 

an11, an12: Kafka Brokers

I had wanted to use all of the R720s as Hadoop workers, but we'd like to be able to store a week's worth of Kafka log buffer. There isn't enough storage space on the other machines to do this, so I think we should use two of these as Kafka brokers. If we RAID 1 the buffer drives (which we probably should), that makes the Kafka buffer 10 TB (2 nodes * 10 x 2 TB drives / 2 for RAID), which should be enough to cover us for a while.
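A quick capacity check on that buffer (a sketch; it assumes that, as with the workers below, the first 2 of each R720's 12 drives go to an OS RAID 1, leaving 10 buffer drives per broker):

    # Usable Kafka buffer space with RAID 1 on the buffer drives.
    brokers = 2
    buffer_drives = 10                # per broker, after the 2-drive OS RAID 1
    drive_tb = 2

    per_broker_tb = buffer_drives * drive_tb / 2   # RAID 1 halves it -> 10 TB per broker
    total_tb = brokers * per_broker_tb             # 20 TB across both brokers
    # If the buffered stream is snappy-compressed at roughly 7 TB/week, a week's
    # buffer fits comfortably within a single broker's 10 TB.
    print(per_broker_tb, total_tb)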

 

 

an01 - an10: Storm/ETL

These are beefy (tons of RAM, 24 cores), so they'll be good for hefty realtime stuff. We could also take a few of these and use them as Hadoop workers, but since they don't really have that much space to add to the HDFS pool, I'm not sure it's worth it.

 

 

an23 - an25: ZooKeepers

As I said above, let's keep the ZKs here.

 

 

an26, an27: Hadoop Masters

Move the NameNodes (primary and secondary/failover) here.

 

 

an13 - an22: Hadoop Workers

We need to use the first 2 drives in RAID 1 for the OS, so really we only have 10 drives per node for HDFS space. Still, across the 10 workers that gives us 200 TB. With an HDFS replication factor of 3, that's 67 TB of usable HDFS.
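Spelled out (a sketch using the figures above):

    # Effective HDFS capacity of the 10 worker nodes (an13 - an22).
    workers = 10
    drives_per_node = 10              # 12 drives minus the 2-drive OS RAID 1
    drive_tb = 2
    replication = 3

    raw_tb = workers * drives_per_node * drive_tb     # 200 TB raw
    print("%d TB raw -> ~%d TB usable at replication %d"
          % (raw_tb, round(raw_tb / replication), replication))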

 

 

 

Thoughts? Since we'll def want to use an13-an22 as workers, I'll start spawning those up and adding them to the cluster today. Yeehaw!

 

-Ao

 

 

 

 

*For the sake of simplicity, I'm not counting other input sources (event log, sqoop, etc.), and instead hoping that rounding up as much as I did will cover these needs.

 

 

 

 

 

 


 

_______________________________________________

Analytics mailing list

Analytics@lists.wikimedia.org

https://lists.wikimedia.org/mailman/listinfo/analytics