In July, Facebook explained how it moved its 30-petabyte Hadoop cluster without taking it offline.
In September, it talked about a system called FBAR that helps automate the resolution of system errors to the point that two administrators can manage half of Facebook’s massive infrastructure.
But that’s just in the last two months.
In May, Facebook detailed how it moved operations into a new data center thanks in part to a homegrown configuration, provisioning and testing tool called Kobold.
Over the past few years, it has blessed the data community with a plethora of entirely new products and techniques,
and Facebook has also, undeniably, done masterful work to make an old database run at a scale for which it was never designed.
Other companies likely would, and certainly should, be willing to pay large sums of money for Facebook’s webscale expertise. Twitter, Reddit and — just a few days into its life as a cloud provider — Apple have already established reputations for shoddy uptime. Other growing companies such as Zynga and LinkedIn, and even the next generation of webscale companies, are also going to run into the same problems that Facebook has. Why reinvent the wheel trying to solve problems Facebook has already solved?
It’s already happening elsewhere. Google has converted its deep expertise in running a webscale search engine into a wide array of enterprise services that includes Google Apps and App Engine.
Yahoo spun off Hortonworks to capitalize on its extensive Hadoop knowledge. These companies had developed internal skill sets in next-generation technologies, and when markets emerged for those skills, they productized them.
Systems management software and support is a huge market, but few, if any, legacy vendors have products and knowledge that easily translate into webscale environments. Facebook could stand to make a lot of money by consulting with customers on how to build their data centers and architect their applications, and then selling them the software tools to keep those apps up and running.