Infrastructure

We specialize in “infrastructure” – products and services that necessarily have aspects of both networks and systems. It’s a great area for a small, specialized company because relatively few people understand both topics well, but you might wonder how big a niche it is – it sounds pretty geeky and specialized. We got into the space by chance, having decided to broaden a specific study of Internet Caching into a more general report. We then did research on load balancing because it seemed like another topic we had background in.

But if you look at the areas we’ve examined in detail over the years, you’ll see it’s probably much broader than you might have expected…

IRG’s first research was into the emerging Internet Caching market in 1997. From that first report until today we have emphasized research into “infrastructure” topics – topics at the boundary of networks and systems. Fully understanding these market opportunities requires fluency in both networks and systems, a demand perfectly suited to our “bilingual” history and interests.

Infrastructure doesn’t stand still. Over the years we’ve studied a diversity of emerging markets that all fall into this general category:

•Internet Caching and Content Delivery: in 1997 this was largely the commercial adaptation of the open-source Squid software for HTTP object caching. Since then, network acceleration has gone far beyond HTTP caching, but object caching continues to be a key element in CDN and Cloud acceleration.

•Traffic Management: Load balancing is a basic datacenter tool for availability and scaling, as well as a fundamental tool for performance acceleration within the Internet (e.g., clever uses of DNS to pick a local instance).
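The two basic policies mentioned above – round-robin distribution across a server pool, and DNS-style selection of a nearby instance – can be sketched in a few lines. This is an illustrative toy, not a real load balancer; the addresses, region names, and latency figures are hypothetical.

```python
# Illustrative sketch of two basic load-balancing policies:
# (1) round-robin across a server pool, (2) DNS-style selection of
# the "closest" instance. All names and numbers are hypothetical.
from itertools import cycle

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical pool
round_robin = cycle(servers)

def next_server():
    """Round-robin: spread requests evenly for availability and scaling."""
    return next(round_robin)

def resolve_nearest(latency_ms):
    """DNS-style steering: answer with the instance the client can
    reach fastest (here, lowest measured latency in milliseconds)."""
    return min(latency_ms, key=latency_ms.get)

# Example: a geo-aware resolver picking among regional instances.
measurements = {"us-east": 12, "eu-west": 85, "ap-south": 190}
nearest = resolve_nearest(measurements)  # -> "us-east"
```

Real implementations add health checks, weights, and session affinity, but the core decision in both cases is just a selection policy over a candidate set.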

•Route Optimization: For a while companies like RouteScience and netVMG tried to create a market for picking among multiple network providers for optimal performance. A market for these products never materialized, but the technology is broadly available (e.g., in Cisco’s Performance Optimized Routing).

•WAN Acceleration: When we started, Expand was the market leader, then Peribit, and now the market is dominated by Cisco and Riverbed. Microsoft also participates in an interesting way, fixing many of the problems that create the need for these add-on tools (e.g., CIFS chattiness); however, the timescale of Microsoft’s impact is very long (5-10 years).

•Branch Office Architecture and Data Center Consolidation: Network performance issues encourage putting applications and data near the user; the economics of support and security encourage consolidating them into a few large data centers. These opposing pressures make the two topics strongly related.

•Security and Identity: The network is the greatest source of security threats. Devices that can perform real-time inspection and processing of network flows (firewalls, intrusion detection and prevention systems, proxy and caching systems) play a fundamental role in providing security services. Identity is a critical part of the network infrastructure, needed for secure applications to be constructed.

•Video Delivery: Video attracts a lot of attention among network equipment vendors and service providers because of the amount of bandwidth it consumes. Creating high-quality video streams cost-efficiently is a technical challenge; understanding how to monetize Internet video is a great business question.

•Collaboration: As it turns out, IRG has done research in collaboration from the beginning. Today collaboration is very much at the center of our interests. Voice and video are demanding network applications; Cisco believes that human collaboration enabled by the network will be the next wave of the network, broadly influencing the world; and collaborative applications with global user populations are one of the clear use cases for Cloud computing.

•Application Delivery: Application Delivery is the business pioneered by Citrix with MetaFrame/Presentation Server/XenApp. This is a delightfully complex area simply because one size by no means fits all. The technologies range from remote desktop delivery and virtual desktops to dynamic application loading and WAN optimization.

•Virtualization: We started to get seriously involved with virtualization when VMware presented the concept of virtual appliances in 2006. Since then virtualization has only grown in importance. It fits nicely into our area of focus both because of the data center networking issues (e.g., evolution to 10G Ethernet and converged SAN and LAN networking) and because of the WAN optimization opportunities that delivering data center computing to a global user base presents.

•Data Center Networking: The network is a critical part of a large data center, and the network is increasingly inseparable from the platforms and systems running in the data center. In 2009 we started to see real movement toward 10G Ethernet, Cisco’s initial drive toward converging SAN traffic onto Ethernet networking, and the first network systems designed to optimize virtualized computing.

•Cloud Computing/Utility Computing: Virtualization in turn has enabled exciting new forms of application delivery, including shared private dynamic data centers (Utility Computing) and shared public data centers (“Cloud” computing). These developments are both heavily dependent on data center networking and application delivery technology.

•OpenFlow: Starting as an effort among leading network researchers to make it easier to build and test research networks at real-world scale, OpenFlow has become an initiative with major economic implications for large data center design and operation, and beyond.

•Big Data: MapReduce, created by Google in 2004, is a framework for managing distributed computing clusters that process large amounts of data. Yahoo! subsequently supported the creation of an open-source version called Hadoop. This in turn has been a catalyst for a comprehensive set of tools designed to help organizations process large data sets that previously could not be used affordably.
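The MapReduce model mentioned above is simple at its core: a map function emits (key, value) pairs, the framework groups values by key (the “shuffle”), and a reduce function folds each group. Here is a minimal, single-process sketch of that flow using the classic word-count example; real frameworks such as Hadoop distribute these same phases across a cluster.

```python
# Minimal in-process sketch of the MapReduce model (word count).
# Real frameworks distribute the map, shuffle, and reduce phases
# across many machines; this toy runs them in one process.
from collections import defaultdict

def map_fn(document):
    """Map phase: emit (word, 1) for every word in an input record."""
    for word in document.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    """Reduce phase: sum the counts collected for one word."""
    return word, sum(counts)

def map_reduce(inputs, mapper, reducer):
    groups = defaultdict(list)            # shuffle: group values by key
    for record in inputs:
        for key, value in mapper(record):
            groups[key].append(value)
    return dict(reducer(k, v) for k, v in groups.items())

docs = ["the quick brown fox", "the lazy dog"]
counts = map_reduce(docs, map_fn, reduce_fn)  # counts["the"] == 2
```

The appeal of the model is that `map_fn` and `reduce_fn` are independent, stateless pieces, which is what lets a framework parallelize them across thousands of nodes without the programmer managing the distribution.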
