NETSCOUT embraces disruption by porting packet flow visibility software to Open Compute platforms

NETSCOUT recently announced its new nGenius PFS 5000 network packet brokers, which are based on off-the-shelf Open Compute platforms. Big Switch Networks blazed this trail back in 2013 with the launch of its Big Monitoring Fabric (BMF), but NETSCOUT, which sells a family of purpose-built network packet broker platforms, is embracing disruption by porting its packet flow visibility software to white box switches, giving customers a more cost-effective, easily scalable solution for network-wide visibility.

The benefits of this approach are described in the ACG white paper I authored: “Open Compute Platforms Power Software-Driven Packet Flow Visibility”.

Note that while Big Switch’s BMF is based on the classic SDN architecture using a central controller and the OpenFlow protocol, NETSCOUT has taken a different approach with the PFS 5000, which is based on a fully distributed, mesh architecture that is self-organizing and self-healing.
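
To make the contrast concrete, here is a minimal Python sketch of the self-healing behavior a distributed mesh enables: each node tracks peer liveness with heartbeats and routes around a failed peer on its own, with no central controller in the loop. The class, timeout and forwarding logic are my own illustration, not NETSCOUT’s actual design.

    import time

    class MeshNode:
        """Illustrative mesh node that tracks peer liveness via heartbeats."""

        HEARTBEAT_TIMEOUT = 3.0  # seconds of silence before a peer is presumed down

        def __init__(self, name):
            self.name = name
            self.last_seen = {}  # peer name -> timestamp of last heartbeat

        def receive_heartbeat(self, peer, now=None):
            self.last_seen[peer] = time.time() if now is None else now

        def live_peers(self, now=None):
            now = time.time() if now is None else now
            return [p for p, t in self.last_seen.items()
                    if now - t < self.HEARTBEAT_TIMEOUT]

        def forwarding_path(self, destination, now=None):
            """Self-healing: pick a path from local state, no controller involved."""
            live = self.live_peers(now)
            if destination in live:
                return [self.name, destination]
            if live:
                return [self.name, live[0], destination]  # detour via a live peer
            raise RuntimeError("no live peers; node is isolated")

    # Node A loses its direct peer B, then heals around it via C.
    a = MeshNode("A")
    a.receive_heartbeat("B", now=1000.0)
    a.receive_heartbeat("C", now=1000.0)
    print(a.forwarding_path("B", now=1001.0))  # ['A', 'B'] -- direct path
    a.receive_heartbeat("C", now=1004.0)       # C is still alive, B has gone quiet
    print(a.forwarding_path("B", now=1005.0))  # ['A', 'C', 'B'] -- healed route

In a controller-based design, by contrast, the equivalent reroute would be computed centrally and pushed down to the switches via a protocol like OpenFlow.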

It will be interesting to watch this market segment evolve as the power of switching platforms based on merchant silicon continues to increase and other network packet broker vendors embrace the disruption of Open Compute.


Visibility & analytics at the ONUG Spring 2017 conference

I was invited to speak about “A Framework for Infrastructure Visibility, Analytics and Operational Intelligence” at the Open Networking User Group’s ONUG Spring 2017 conference, held in San Francisco back in April. My presentation is up on Slideshare, and ONUG has posted a video of the session.

My goal was to stimulate thinking about how we bring the power of Big Data to infrastructure monitoring and analytics. The core idea is a common framework in which tools share visibility data from an array of sources and feed that data into a set of shared analytics engines supporting a variety of operational use cases.

It’s not economically feasible, nor is it technically desirable, for each tool to bring its own Big Data analytics stack and ingest its own dedicated streaming telemetry feeds. As an industry, we need to think about how to create more commonality at the lower layers of the stack, enabling lower-cost solutions that facilitate data sharing and common analytics across a wide range of use cases.
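
As a thought experiment, here is a minimal Python sketch of that commonality: visibility data from different sources is normalized into one shared record format, ingested once, and fanned out to multiple analytics engines. The schema, metric names and thresholds are hypothetical, purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class TelemetryRecord:
        """One shared record format for visibility data from any source."""
        source: str      # e.g. "flow", "snmp", "streaming-telemetry"
        device: str
        metric: str
        value: float
        timestamp: float

    def security_engine(rec: TelemetryRecord):
        if rec.metric == "new_flows_per_sec" and rec.value > 10_000:
            print(f"[security] possible scan or DDoS from {rec.device}")

    def performance_engine(rec: TelemetryRecord):
        if rec.metric == "if_util_pct" and rec.value > 90:
            print(f"[performance] {rec.device} congested at {rec.value}%")

    # One shared ingestion path feeding many engines,
    # instead of a dedicated Big Data stack per tool.
    ANALYTICS_ENGINES = [security_engine, performance_engine]

    def ingest(rec: TelemetryRecord):
        for engine in ANALYTICS_ENGINES:
            engine(rec)

    ingest(TelemetryRecord("flow", "edge-rtr-1", "new_flows_per_sec", 15_000, 0.0))
    ingest(TelemetryRecord("streaming-telemetry", "core-sw-2", "if_util_pct", 93.0, 1.0))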

On this front, ONUG has a Monitoring & Analytics initiative that is working to define user requirements and develop proof-of-concept demos for a new, comprehensive suite of tools to manage software-defined infrastructure. A panel at the conference provided an update on the status of the initiative, and ONUG has posted a video of this session.

I also moderated an interesting panel discussion on Retooling for the Software-Defined Enterprise that featured Aryo Kresnadi from FedEx, Ian Flint from Yahoo and Dan Ellis from Kentik, all of whom have extensive experience building and using monitoring & analytics tools in cloud-scale environments. ONUG has posted a video of this session as well, along with many others from the conference, on ONUG’s Vimeo channel.

If these topics interest you, be sure to save the date for ONUG Fall 2017, which will be held October 17 & 18 in New York City.

Cloud-scale technologies for cloud-scale infrastructure visibility & analytics

I think we can all agree that cloud-scale technologies are wonderful things, enabling hyper-agile delivery of applications and services to billions of users worldwide. Software-defined networking, virtualization, microservices, containers, open source software and Open Compute platforms are enabling cloud service providers to achieve mind-boggling economies of scale while keeping pace with insatiable user demand.

However, as telecom service providers and large-scale enterprises move to embrace cloud-scale technologies, they are proving to be both a blessing and a curse. The benefits are straightforward: rapidly deliver a broader range of applications and services at lower cost while being able to quickly respond to changing customer needs. The downside is that both service providers and enterprises need to employ new toolsets for developing, deploying and managing these applications and services.

Disaggregation and decomposition are consistent themes in cloud-scale technology. Monolithic platforms are disaggregated into software-driven control planes running on commodity hardware. Network functions and computing resources are virtualized and decoupled from the underlying hardware. Monolithic applications are decomposed into many microservices, each running in its own container. The business value in terms of lower hardware costs coupled with increased flexibility and agility is real, but there are added costs associated with managing all of these piece parts.

The problem becomes obvious when service providers and enterprises try to apply existing management tools and methodologies to cloud-scale infrastructure. For all of their internal complexity, monolithic platforms and applications are simpler to configure, monitor and control than multiple layers of many different software components running on virtualized infrastructure. While the industry has recently made great strides by adopting new tools for cloud-scale configuration and orchestration, we are still playing catch-up on equally effective approaches to cloud-scale visibility and analytics.

Yet here is where cloud-scale technologies come to their own rescue. Because software and hardware functions are disaggregated and decomposed, proper instrumentation implemented at each layer and in every component yields full visibility into the entire stack from top to bottom, while new techniques like streaming telemetry provide extremely granular, real-time insight into the application and service delivery infrastructure.
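
As a simple illustration of what per-layer instrumentation buys you, consider a hypothetical Python sketch in which every layer emits samples of the same shape, so a single query can walk the stack from an application symptom down to the network cause. The layer names and metrics are invented for the example.

    # Each layer emits samples of the same shape, tagged with its layer.
    STACK_SAMPLES = [
        {"layer": "application", "component": "checkout-svc",   "metric": "p99_latency_ms",        "value": 480},
        {"layer": "container",   "component": "checkout-pod-7", "metric": "cpu_throttle_pct",      "value": 5},
        {"layer": "hypervisor",  "component": "host-12",        "metric": "vswitch_drops_per_sec", "value": 220},
        {"layer": "network",     "component": "leaf-3:eth4",    "metric": "if_util_pct",           "value": 97},
    ]

    def walk_stack(samples):
        """Print the stack top to bottom to localize where trouble starts."""
        for s in samples:
            print(f'{s["layer"]:>12} | {s["component"]:<15} {s["metric"]} = {s["value"]}')

    walk_stack(STACK_SAMPLES)
    # Here an application-layer symptom (high p99 latency) traces down to a
    # congested leaf port -- visibility a monolithic black box can't provide.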

Therefore, it’s only natural that cloud-scale visibility and analytics should be implemented on native cloud-scale platforms, leveraging the same technologies: software-defined networking, virtualization, microservices, containers, open source software and Open Compute platforms. This is especially critical when employing Big Data analytics, where the basic technologies are inherently cloud-scale and well-suited to ingesting streaming telemetry feeds and performing real-time streaming analytics on that data.


Real-time network visibility & analytics for operational intelligence

When I launched my network analytics practice for ACG just over a year ago, I decided that my initial research needed to focus on the value of real-time network visibility and Big Data analytics for operational intelligence. SDN, virtualization and the widespread adoption of cloud-scale technologies are enabling new techniques, including streaming telemetry, for instrumenting networks and gaining real-time visibility into traffic flows and network state. At the same time, streaming analytics allows network operators to immediately turn insights into action within seconds or minutes instead of hours or days. Big Data also supports the large-scale data sets needed to apply machine learning techniques for predictive analytics and AI-based network automation.
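
To illustrate the “seconds instead of hours” point, here is a minimal streaming-analytics sketch in Python: an exponentially weighted moving average (EWMA) baseline flags a deviation the moment a sample arrives, rather than waiting for a batch job to run. The smoothing factor and threshold are arbitrary values chosen for the example.

    class StreamingDetector:
        """Flags samples that deviate sharply from a running EWMA baseline."""

        def __init__(self, alpha=0.2, threshold=3.0):
            self.alpha = alpha          # EWMA smoothing factor
            self.threshold = threshold  # multiple of baseline that triggers an alert
            self.baseline = None

        def observe(self, value):
            if self.baseline is None:
                self.baseline = value
                return False
            anomalous = value > self.threshold * self.baseline
            # Update the baseline after the check so a spike can't mask itself.
            self.baseline = self.alpha * value + (1 - self.alpha) * self.baseline
            return anomalous

    detector = StreamingDetector()
    for sample in [100, 105, 98, 110, 102, 480, 95]:  # e.g. flows/sec on an interface
        if detector.observe(sample):
            print(f"anomaly: sample {sample} far above running baseline")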

The ROI for real-time operational intelligence is compelling across a wide array of use cases, including: rapid root cause analysis and reduced mean time to repair (MTTR); immediate detection of security threats inside the network perimeter; real-time performance monitoring for on-the-fly traffic engineering; continuous KPI monitoring for service assurance; and the holy grail: closed-loop feedback for analytics-driven automation. The potential gains are huge and the industry is witnessing a new wave of innovation that will enable us to reinvent how networks are deployed and operated, and how services are delivered and managed.
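
And for the closed-loop case, a hypothetical sketch of the measure-analyze-act cycle: an analytics verdict drives an automated mitigation rather than just an alert. The remediation function is a stand-in for a real controller or orchestrator API, which would do the actual enforcement.

    def analyze(sample):
        """Toy analytics stage: classify a telemetry sample."""
        if sample["metric"] == "syn_rate" and sample["value"] > 5_000:
            return "syn-flood"
        return None

    def remediate(verdict, sample):
        """Toy actuation stage: in real life, call an SDN controller API here."""
        if verdict == "syn-flood":
            print(f'rate-limiting SYNs on {sample["device"]} (hypothetical API call)')

    # The loop: measure -> analyze -> act -> measure again on the next cycle.
    for sample in [
        {"device": "edge-rtr-1", "metric": "syn_rate", "value": 800},
        {"device": "edge-rtr-1", "metric": "syn_rate", "value": 12_000},
    ]:
        verdict = analyze(sample)
        if verdict:
            remediate(verdict, sample)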

Network operators are leveraging new real-time visibility and analytics technologies in three separate, but interconnected, domains:

  • Telecom network and communication services
  • Cloud-scale services delivered via the Internet
  • Hyperscale data center infrastructure

Therefore, my research in operational intelligence has a separate track covering developments in each domain, although there is overlap between tracks. For example, new telecom services are being delivered via the cloud, and SD-WANs are telecom services that use the Internet to connect users to applications in the cloud. The cloud-scale services track looks at visibility and analytics from the perspective of a network operator delivering or consuming services over networks it doesn’t own or operate, whereas the hyperscale data center track looks at the role of visibility and analytics in managing the infrastructure used to deliver those cloud-scale services.

As a result, my research spans three separate, but interrelated, markets:

  • Telecom services
  • Cloud-scale services
  • Enterprise IT

While today these are three distinct markets, over the next decade I expect the lines will blur as the industry converges on delivering the majority of applications and services via public and hybrid clouds. Picture one vast market – cloud-scale services – segmented by application type: consumer, enterprise IT, communications, IoT, etc. At that point, the network simply provides access and transport for user devices, machines and sensors to connect with applications running in the cloud.

As an industry, we need to solve many technical problems in order to get there, with security being the most significant challenge, but today’s breakthroughs in real-time network visibility and Big Data analytics will play a key role in realizing this vision.