When a technology is widely adopted, attention shifts from adding functionality to solving the problems that remain. For the Apache Hadoop project, those remaining problems are many. While Hadoop is the standard for map/reduce in the enterprise, it has a reputation for being unstable, difficult to administer and slow to implement. And there are plenty of projects and companies trying to address these issues.

MapR is one such company. M.C. Srivas, the company’s CTO and cofounder, said MapR was founded to tackle Hadoop’s shortcomings without changing the way programmers interact with it.

“The way Hadoop is deployed nowadays, you hire some Ph.D.s from Stanford and ask them to write your code as well as manage the cluster,” he said. “We want a large ops team to run this, and we want to separate them from the application team.”

But the real secret for MapR isn’t just fixing Hadoop’s problems with its own distribution of map/reduce; it’s also maintaining API compatibility with Hadoop. “Hadoop is a de facto standard, and we cannot go and change that,” said Srivas. “I want to go with the flow. We are very careful not to change anything or deviate from Hadoop. We keep all the interfaces identical and make changes where we think there’s a technical problem.

“With the file system, we thought its architecture was broken, so we just replaced it, but kept the interfaces identical. We looked at the map/reduce layer, and we saw enormous inefficiencies that we could take care of. We took a different approach. There are lots of internal interfaces people use. We took the open-source stuff and enhanced it significantly. The stuff above these layers, like Hive and Pig and HBase, which is a pretty big NoSQL database, we kept those more or less unchanged. In fact, HBase runs better and is more reliable.”

Hadoop also has a well-known weak spot in the way it handles its NameNode. This node is the manager of a Hadoop cluster, and it keeps the index of where every file in the cluster is stored. Hadoop can be configured to have a backup NameNode, but if both of these nodes fail, all the data in a Hadoop cluster instantly vanishes. Or, rather, it becomes unindexed, and might as well be lost.
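The NameNode’s central role is visible from the client side: every metadata operation in HDFS, from listing a directory to locating a file’s blocks, is answered by it. Here is a minimal sketch using Hadoop’s standard Java FileSystem API (the cluster address hdfs://namenode:8020 is a hypothetical placeholder):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListHdfsRoot {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical cluster address; every call below is served from
            // the NameNode's in-memory picture of the file system.
            conf.set("fs.defaultFS", "hdfs://namenode:8020");

            FileSystem fs = FileSystem.get(conf);
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
            }
            fs.close();
            // If the NameNode (and its backup) are gone, calls like listStatus()
            // fail: the blocks still sit on the DataNodes, but nothing can find them.
        }
    }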

However, NameNode issues aren’t considered an Achilles’ heel by Cloudera, Hadoop’s original benefactor. Mike Olson, the company’s CEO, said, “If you’re prudent, you can recover from a NameNode crash quite quickly.

“One thing people often overlook about Hadoop is that it’s incrementally scalable. If you need to add more servers to your cluster without any shutdown, they spin up and begin to participate in the life of the cluster. What’s more common is that you get more data and you have to repartition your data. The NameNode going down is a problem that is actively being repaired.”

Srivas said that every node in a MapR cluster is also a NameNode containing the entire file directory, so a single NameNode vanishing in a MapR cluster has little effect. For Hadoop users, the current solution is Apache ZooKeeper, a distributed coordination service that keeps track of the nodes in a cluster and can dynamically promote one of them to a position of higher authority.
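What ZooKeeper provides here is essentially leader election: each candidate node registers an ephemeral, sequential znode, the node holding the lowest sequence number acts as the active master, and when it dies its znode disappears and the next node in line is promoted. A minimal sketch of that pattern against the standard ZooKeeper Java client (the path /hadoop-master and the connection string are hypothetical) looks like this:

    import java.util.Collections;
    import java.util.List;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class MasterElection {
        public static void main(String[] args) throws Exception {
            // Hypothetical ZooKeeper ensemble address.
            ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 5000, event -> {});

            // Make sure the (hypothetical) election root exists.
            if (zk.exists("/hadoop-master", false) == null) {
                zk.create("/hadoop-master", new byte[0],
                        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            }

            // Each candidate registers an ephemeral, sequential znode; it is
            // removed automatically if this process dies or loses its session.
            String me = zk.create("/hadoop-master/node-", new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);

            // The candidate with the lowest sequence number is the active master;
            // the others stand by and take over as earlier znodes disappear.
            List<String> candidates = zk.getChildren("/hadoop-master", false);
            Collections.sort(candidates);
            System.out.println(me.endsWith(candidates.get(0))
                    ? "Acting as master" : "Standing by");
        }
    }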

The Lexis of big data
But hot new startups aren’t the only ones getting in on the big data party. LexisNexis, perhaps the original big-data company, recently released its own cluster-based big-data analytics platform to the open-source community. Known as HPCC Systems, this cluster-based platform is built around LexisNexis’ Enterprise Control Language, also known as ECL.

Armando Escalante, CTO and senior vice president of LexisNexis and head of HPCC Systems, said, “We invented this platform that has evolved and grown and gets better each year. We are written in C++, and we have…a data workflow language built for our platform. We developed that 10 years ago, and we have since fixed problems for sorting, linking, joins, etc.”

Perhaps the most intriguing aspect of this recently open-sourced platform is its maturity. “We use this platform to deliver almost 90% of our revenue,” said Escalante. “We’ve been doing big data for many years, before big data was big data. [On Sept. 5], we did the final step, which was putting the source code out. Before it was just binaries and a VM.”

ECL, the language used with HPCC, is a high-level language that Escalante said is similar to SQL. While ECL will be a new language to most developers and business users, he insisted that it is more palatable for the enterprise business user. “Java’s for programmers, and data analytics people don’t want to know Java,” said Escalante. “On the query side, Hive limits you. You can’t encapsulate, there’s no reusability and it’s not Turing complete.”
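The alternative Escalante is pushing back against is writing analytics jobs directly against Hadoop’s Java MapReduce API. Even the canonical word-count job, sketched below using the standard org.apache.hadoop.mapreduce classes, takes several dozen lines of Java to express what a declarative query language states in a line or two:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context ctx)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        ctx.write(word, ONE);   // emit (word, 1) for each token
                    }
                }
            }
        }

        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) sum += v.get();
                ctx.write(key, new IntWritable(sum));   // total count per word
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }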

Whether or not enterprises sidle up to LexisNexis and MapR, it’s undeniable that Hadoop and map/reduce have become must-have tools for enterprise analytics. Ping Li, a partner at venture capital firm Accel Partners, said that the promise of big-data tools like Hadoop is not the same as that of traditional enterprise software. (Accel Partners is a primary investor in Cloudera.)

“I think Hadoop is not about commoditizing as much as it is about delivering a whole new set of value,” he said. “It’s commoditizing the processing of data. It allows you to process all the data, all the time, whereas before you couldn’t actually do it and be able to afford it. By doing that, that commoditization allows new applications.”

According to Olson, “We see it now getting deployed in really interesting ways in finance, in retail, in healthcare, in government and elsewhere. I think that older enterprises are experimenting with and in some cases really rolling this stuff out in production. I predict that trend continues. It used to be that all the data in your business was well-structured data. Increasingly people are having to deal with Web logs, documents and sensor data from assembly lines. Universal data is much more interesting than it used to be. You just need a different kind of platform for that.”

Perhaps the only remaining doubt in the world of Hadoop and map/reduce concerns the patent Google holds on the technology. But Srivas said that Google has been a responsible citizen with this patent thus far. “The map/reduce patent was granted to Apache. We’re just using that. We have not changed the algorithms inside Hadoop, we just made it very robust and much faster,” he said.

MapR’s vice president of marketing, Jack Norris, said that map/reduce and Hadoop are no longer in the experimental stage; they are now mainstream in the enterprise.

“We’re seeing a real demand for a commercial distribution with enterprise-grade features,” he said. “They’re moving into mission-critical uses. It’s financial services, it’s media, it’s telco, it’s government applications… It’s being used for such a broad array of use cases, broad array of data sources.”