Jeffrey M. Birnbaum, Co-Founder & CEO, 60East Technologies, Inc.
If you were designing and building the Google infrastructure today, what would you build? It’s an interesting question, and one without a simple answer. The world has changed since the early days of Google. In 1998, a high-performance commodity server was a dual-processor system with each processor on its own socket, running at 200-300MHz. Those systems had 256-512MB of RAM and a few hundred GB of disk spread across multiple 9GB hard drives spinning at 7200RPM. 100Mbps Ethernet was the high-end networking option.

“Modern servers can analyze data and produce results fast enough to fill the network capacity, and that’s enough power for an amazing number of high performance problems”

By today’s standards, those systems were memory-hobbled, network-constrained, and storage-deprived, with a very limited ability to parallelize on a single machine. Google solved the problem by building a bigger system out of many small ones. A single system didn’t have enough bandwidth, so Google used more systems. A single system didn’t have enough processor power, and could only run a few concurrent processes, so Google ran tasks on more systems. To gain capacity, Google added complexity by distributing work across its fabric, and added cost in the form of more infrastructure and coordinating systems. Google also designed the system to return an answer quickly, even if that answer was based on data minutes, hours, or days out of date. Google’s platform is a monumental engineering achievement, but a question remains.
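The scale-out trade-off described above can be sketched in a few lines: scatter a query across many small workers, gather with a deadline, and fall back to a cached, possibly stale, answer for any worker that misses it. This is a minimal illustration of the pattern, not Google's actual design; the shard count, deadline, and `stale_cache` fallback are assumptions made for the sketch.

```python
# Hedged sketch of scatter-gather with a deadline: answer quickly,
# even if some answers come from out-of-date cached data.
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def make_shard(shard_id):
    def query(term):
        # Stand-in for a network call to one small machine.
        return f"shard{shard_id}:{term}"
    return query

def scatter_gather(term, shards, stale_cache, deadline_s=0.05):
    results = []
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        futures = {pool.submit(q, term): i for i, q in enumerate(shards)}
        for future, shard_id in futures.items():
            try:
                results.append(future.result(timeout=deadline_s))
            except TimeoutError:
                # Return stale data for this shard rather than wait.
                results.append(stale_cache.get(shard_id, ""))
    return results
```

The coordination cost mentioned above shows up directly here: the gather loop, the deadline, and the stale-data fallback are all machinery that a single fast machine would not need.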
If you were building that platform today, would you make the same choices? Today, you can order a single high-end server off the shelf with more processing power, network bandwidth, memory, and disk capacity than a full rack of servers had in 1998. A high-end server now has 36 cores spread across 2 sockets, with each core running at 2GHz or faster, a terabyte of memory, and dozens of terabytes of storage. 40 gigabit Ethernet is common, with 100 gigabit Ethernet on the horizon.

The problems are also tougher. Applications need to do their work faster than ever before. At 60East, we call this the drive to “real-time”. Our customers measure end-to-end performance in microseconds while keeping millions of live records up to date. With our customers, we live high-performance computing in the real world. Like Google, though not yet at the same scale, our customers process message streams at an ever-increasing pace and volume. Unlike Google, our customers can’t tolerate stale data: businesses are at stake if their systems can’t keep up. The 60East Advanced Message Processing System (AMPS) processes billions of messages daily in some of the world’s largest financial institutions. We’ve learned what works, and what doesn’t, through experience. With today’s abundance of capacity, the problem isn’t finding enough processor power or storage to handle the data.
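The scale-up alternative is simpler to sketch: on a modern many-core server, a message stream can be fanned out across local cores with no distribution fabric at all. This is a minimal illustration, not AMPS code; the message format and the per-message checksum workload are invented for the example.

```python
# Hedged sketch (not AMPS): parallelizing a message stream across the
# cores of a single machine instead of across a fleet of small systems.
from multiprocessing import Pool, cpu_count

def process_message(message: bytes) -> int:
    # Stand-in for real per-message work (parse, enrich, route).
    return sum(message) % 256

def process_stream(messages):
    # One worker per core; chunking amortizes inter-process overhead.
    with Pool(processes=cpu_count()) as pool:
        return pool.map(process_message, messages, chunksize=1024)

if __name__ == "__main__":
    stream = [b"order:%d" % i for i in range(10_000)]
    results = process_stream(stream)
```

Note what is absent compared with the scale-out approach: no coordinating systems, no network hops between workers, and every result computed from current data.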