Posts

The Apache Projects – The Justice League Of Scalability

Captain Apache

In this post I will describe what I believe to be the most important projects within the Apache Projects for building scalable web sites and generally managing large volumes of data.

If you are not aware of the Apache Projects, then you should be. Do not mistake this for The Apache Web (HTTPD) Server, which is just one of the many projects in the Apache Software Foundation. Each project is its own open-source movement and, while Java is often the language of choice, it may have been created in any language.

I often see developers working on solutions that can be easily solved by using one of the tools in the Apache Projects toolbox. The goal of this post is to raise awareness of these great pieces of open-source software. If this is not new to you, then hopefully it will be a good recap of some of the key projects for building scalable solutions.

The Apache Software Foundation
provides support for the Apache community of open-source software projects.

You have probably heard of many of these projects, such as Cassandra or Hadoop, but maybe you did not realize that they all come under the same umbrella. This umbrella is known as the Apache Projects and is a great place to keep tabs on existing and new projects. It is also a gold seal of approval in the open-source world. I think of these projects, or tools, as the superheroes of building modern scalable websites. By day they are just a list of open-source projects on a rather dry and hard-to-navigate website at https://www.apache.org. But by night… they do battle with some of the world’s most gnarly datasets. Terabytes and even petabytes are nothing to these guys. They are the nemesis of high throughput, the Digg effect, and the dreaded query-per-second upper limit. They laugh in the face of limited resources. OK, maybe I am getting carried away here. Let’s move on…

Joining The Justice League

Before a project joins the Apache Projects, it must first be accepted into the Apache Incubator. For instance, Deltacloud has recently been accepted into the Apache Incubator.

The Apache Incubator has two primary goals:

  • Ensure all donations are in accordance with the ASF legal standards
  • Develop new communities that adhere to our guiding principles

Apache Incubator Website

You can find a list of all the projects currently in the Apache Incubator here.

Our Heroes

Here is a list, in no particular order, of tools you will find in the Apache Software Foundation Projects.

Kryptonite

While each has its major benefits, none is a “silver bullet” or a “golden hammer”. When you design software that scales or does any one job very well, you have to make certain choices. If you are not using the software for the task it was designed for, then you will quickly find its weaknesses. Understanding and balancing the strengths and weaknesses of the various solutions will enable you to better design your scalable architecture. Just because Facebook uses Cassandra to do battle with its petabytes of data does not necessarily mean it will be a good solution for your “simple” terabyte-wrestling architecture. What you do with the data is often more important than how much data you have. For instance, Facebook has decided that HBase is now a better solution for many of their needs. [This is the cue for everyone to run to the other side of the boat.]

Kryptonite is one of the few things that can kill Superman.

The word kryptonite is also used in modern speech as a synonym for Achilles’ heel, the one weakness of an otherwise invulnerable hero.

“Kryptonite” by Wikipedia

Now, let’s look in more detail at some of the projects that can fly faster than a speeding bullet and leap tall datasets in a single [multi-server, distributed, parallelized, fault-tolerant, load-balanced and adequately redundant] bound.

Apache Cassandra

Cassandra was built by Facebook to hold the ridiculous volumes of data within its email system. It is much like a distributed key-value store, but with a hierarchy. The model is very similar to most NoSQL databases. The data-model consists of columns, column-families and super-columns. I will not go into detail here about the data-model, but there is a great intro (see “WTF is a SuperColumn? An Intro to the Cassandra Data Model“) that you can read.
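As a rough illustration of that hierarchy (this is not the real Cassandra client API, just a conceptual sketch with hypothetical keyspace, column-family and key names), the data-model can be pictured as nested maps:

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch only: keyspace -> column-family -> row key -> column -> value.
public class CassandraModelSketch {
    public static void main(String[] args) {
        // Columns for a single row: column name -> value.
        Map<String, String> row = new HashMap<String, String>();
        row.put("email", "clark.kent@dailyplanet.com");  // hypothetical data
        row.put("city", "Metropolis");

        // A column-family: row key -> columns.
        Map<String, Map<String, String>> columnFamily = new HashMap<String, Map<String, String>>();
        columnFamily.put("user:1234", row);

        // A keyspace: column-family name -> column-family.
        Map<String, Map<String, Map<String, String>>> keyspace =
                new HashMap<String, Map<String, Map<String, String>>>();
        keyspace.put("UserProfiles", columnFamily);

        System.out.println(keyspace.get("UserProfiles").get("user:1234").get("email"));
    }
}
```

A super-column simply adds one more level of nesting between the row key and its columns.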

Cassandra can handle fast writes and reads, but its Kryptonite is consistency. It takes time to make sure all the nodes serving the data have the same value. For this reason Facebook is now moving away from Cassandra for its new messaging system, to HBase. HBase is a NoSQL database built on top of Hadoop. More on this below.

Apache Hadoop

Apache Hadoop, son of Apache Nutch and later raised under the watchful eye of Yahoo, has since become an Apache Project. Nutch is an open-source web search project built on Lucene. The component of Nutch that became Hadoop gave Nutch its “web-scale”.

Hadoop’s goal was to manage large volumes of data on commodity hardware, such as thousands of desktop hard drives. Hadoop takes much of its design from papers published by Google on the Google File System and MapReduce. It stores data on the Hadoop Distributed File-System (a.k.a. “HDFS”) and manages the running of distributed Map-Reduce jobs. In a previous post I gave an example of using Ruby with Hadoop to perform Map-Reduce jobs.

Map-Reduce is a way to crunch large datasets using two simple algorithms (“map” and “reduce”). You write these algorithms to be specific to the data you are processing. Although your map and reduce code can be extremely simple, Hadoop scales it across the entire dataset, even if you have petabytes of data across thousands of machines. Your resulting data can be found in a directory on your HDFS disk when the Map-Reduce job completes. Hadoop provides some great web-based tools for visualizing your cluster and monitoring the progress of any running jobs.
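To make this concrete, here is a minimal sketch of the canonical word-count job written against Hadoop’s Java Map-Reduce API (the driver class that sets the input and output HDFS paths is omitted):

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map: emit (word, 1) for every word in a line of input.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);
        }
    }
}

// Reduce: sum the counts for each word across the whole dataset.
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : values) {
            sum += count.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
```

The map code runs in parallel against every chunk of the input; Hadoop then groups the emitted keys and hands each key, with all of its values, to a single reduce call.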

Hadoop deals with very large chunks of data. You can tell Hadoop to Map-Reduce across everything it finds under a specific directory within HDFS and then output the results to another directory within HDFS. Hadoop likes [really, really] large files (many gigabytes) made from large chunks (e.g. 128MB), so it can stream through them quickly, without many disk seeks, and can distribute the chunks effectively.

Hadoop’s Kryptonite would be its brute force. It is designed for churning through large volumes of data, rather than being real-time. A common use-case is to spool up data for a period of time (say, an hour) and then run your map-reduce jobs on that data. By doing this you can very efficiently process vast amounts of data, but you will not have real-time results.

Recommended Reading
Hadoop: The Definitive Guide by Tom White

Apache HBase

Apache HBase is a NoSQL layer on top of Hadoop that adds structure to your data. HBase uses write-ahead logging to manage writes, which are later merged down to HDFS. A client request is responded to as soon as the update is written to the write-ahead log and the change is made in memory, which means that updates are very fast. The read side is also fast, since data is stored on disk in key order: scans across sequential keys require few disk seeks. Larger scans are not currently possible.
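Here is a minimal sketch using the HBase Java client API; the “pages” table, its “content” column-family and the row keys are hypothetical and are assumed to already exist:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "pages");  // hypothetical table

        // A write: acknowledged once it is in the write-ahead log and in memory.
        Put put = new Put(Bytes.toBytes("com.example/index.html"));
        put.add(Bytes.toBytes("content"), Bytes.toBytes("title"), Bytes.toBytes("Example"));
        table.put(put);

        // A scan over a contiguous key range: cheap, because rows are stored in key order.
        Scan scan = new Scan(Bytes.toBytes("com.example/"), Bytes.toBytes("com.example0"));
        ResultScanner scanner = table.getScanner(scan);
        for (Result row : scanner) {
            System.out.println(Bytes.toString(row.getRow()));
        }
        scanner.close();
        table.close();
    }
}
```

Notice that rows which should be read together are given row keys that sort together; choosing the key layout is most of the data-modelling work in HBase.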

HBase’s Kryptonite would be similar to that of most NoSQL databases out there: many use-cases still benefit from using a relational database, and HBase is not a relational database.

Look Out For This Book Coming Soon
HBase: The Definitive Guide by Lars George (May 2011)

Lars has an excellent blog that covers Hadoop and HBase thoroughly.

Apache ZooKeeper

Apache ZooKeeper is the janitor of our Justice League. It is being used more and more in scalable applications such as Apache HBase, Apache Solr (see below) and Katta. It manages an application’s distributed needs, such as configuration, naming and synchronization. All these tasks are important when you have a large cluster with constantly failing disks, failing servers, replication and shifting roles between your nodes.
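As a rough sketch of the ZooKeeper Java client, the example below stores a shared configuration value and registers an ephemeral “membership” znode; the connection string and znode paths are hypothetical, and the parent znodes are assumed to already exist:

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZooKeeperSketch {
    public static void main(String[] args) throws Exception {
        // Connect to a (hypothetical) ZooKeeper ensemble.
        ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 3000, new Watcher() {
            public void process(WatchedEvent event) {
                System.out.println("Event: " + event);
            }
        });

        // Shared configuration: any node in the cluster can read this znode.
        if (zk.exists("/myapp/config", false) == null) {
            zk.create("/myapp/config", "shards=16".getBytes(),
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }

        // Group membership: an ephemeral znode disappears automatically if this
        // server's session dies, so the rest of the cluster notices the failure.
        zk.create("/myapp/servers/node-1", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

        // Reading with watch=true means we will be notified if the config changes.
        byte[] config = zk.getData("/myapp/config", true, null);
        System.out.println(new String(config));
        zk.close();
    }
}
```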

Apache Solr

Apache Solr is built on top of one of my favorite Apache Projects, Apache Lucene [Java]. Lucene is a powerful search engine API written in Java. I have built large distributed search engines with Lucene and have been very happy with the results.

Solr packages up Lucene as a product that can be used stand-alone. It provides various ways to interface with the search engine, such as via XML or JSON requests, so Java knowledge is not a requirement for using it. It also adds a layer to Lucene that makes it scale more easily across a cluster of machines.
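Because Solr speaks plain HTTP, a minimal sketch needs little more than a URL; the host, port and query below are hypothetical, and the standard /select handler with wt=json is assumed:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLEncoder;

public class SolrQuerySketch {
    public static void main(String[] args) throws Exception {
        // Query a (hypothetical) local Solr instance over plain HTTP and
        // ask for the response as JSON -- no Java-specific client required.
        String q = URLEncoder.encode("title:superman", "UTF-8");
        URL url = new URL("http://localhost:8983/solr/select?q=" + q + "&wt=json&rows=10");

        BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);  // raw JSON response from Solr
        }
        in.close();
    }
}
```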

Apache ActiveMQ

Apache ActiveMQ is a message queue much like RabbitMQ or ZeroMQ. I mention ActiveMQ because I have used it, but it is definitely worth checking out the alternatives.

A message queue is a way to quickly collect data, funnel it through your system and make the same information available to multiple services. This provides separation, within your architecture, between collecting data and using it. Data can be entered into different queues (data streams), and different clients can subscribe to these queues and use the data as they wish.

ActiveMQ has two types of queue, “queue” and “topic”.

The queue type “queue” means that each piece of data on the queue can only be read once. If client “A” reads a piece of data off the queue then client “B” cannot read it, but can read the next item on the queue. This is a good way of dividing up work across a cluster: all the clients in the cluster take a share of the data and process it, but the whole dataset is only processed once. Faster clients take a larger share of the data, and slow clients do not hold up the queue.

A “topic” means that each subscribed client will see all the data, regardless of what the other clients do. This is useful if you have different services all requiring the same dataset: it can be collected and managed once by ActiveMQ, but utilized by multiple consumers. Slow clients can cause this type of queue to back up.
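Here is a minimal sketch of both destination types using the standard JMS API with ActiveMQ; the broker URL and the queue/topic names are hypothetical:

```java
import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class ActiveMQSketch {
    public static void main(String[] args) throws Exception {
        // Connect to a (hypothetical) local broker.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // A "queue": each message is delivered to exactly one consumer,
        // so a cluster of consumers divides the work between them.
        Destination workQueue = session.createQueue("crawl.requests");
        MessageProducer producer = session.createProducer(workQueue);
        producer.send(session.createTextMessage("http://example.com/page-1"));

        // A "topic": every subscriber sees every message,
        // so several services can consume the same data stream.
        Destination events = session.createTopic("crawl.events");
        MessageConsumer subscriber = session.createConsumer(events);
        TextMessage message = (TextMessage) subscriber.receive(1000);
        if (message != null) {
            System.out.println("Received: " + message.getText());
        }

        connection.close();
    }
}
```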

If you are interested in messaging queues then I suggest checking out these Message Queue Evaluation Notes by SecondLife, who are heavy users of messaging queues.

Apache Mahout

The son of Lucene and now a Hadoop side-kick, Apache Mahout was born to be an intelligence engine. Named after the Hindi word for “elephant driver” (Hadoop being the elephant), Mahout has grown into a top-level Apache Project in its own right, mastering the art of Artificial Intelligence on large datasets. While Hadoop can tackle the more heavy-weight datasets on its own, more cunning datasets require a little more algorithmic manipulation. Much of the focus of Mahout is on large datasets using Map-Reduce on top of Hadoop, but the code-base is optimized to run well on non-distributed datasets as well.

We appreciate your comments

If you found this blog post useful then please leave a comment below. I would like to hear which other Apache Projects you think deserve more attention and if you have ever been saved, like Lois Lane, by one of the above.

Comments

  1. Allen Wittenauer

    A slight correction.

    Apache Hadoop came from Apache Lucene. Yahoo! joined the already-in-progress project, adding some much needed know-how around scale. So great are their contributions, that it is a common misconception that Yahoo! started the project.

  2. Hector

    Hi. Facebook only uses cassandra in the inbox feature, so whatever the effect of eventual consistency on cassandra, it would only be noticeable there. Also, they are not moving away from it (as it is still being used). They decided it wasn’t the right fit for the new functionality they were planning.

    Regards

  3. Karthik Shiraly

    Thanks for a great article. I’m very interested in high scalability technologies, and your article gives a real useful overview.

    Both Apache Solr and ActiveMQ have helped reduce development time on my projects. I’d especially like to highlight the fantastic faceted search feature of Solr – that’s thousands of DB ‘select count’ queries saved there!

    The Apache Hive project also deserves a mention in your list – it adds a query language layer of abstraction over hadoop, that makes writing map reduce jobs for data analytics easier and intuitive.

  4. Otis Gospodnetic

    Actually, I’d say Hadoop came from Apache Nutch, where the HDFS predecessor was first created.