“Toto, we’re not in Kansas anymore”

Last week I attended the Open Networking User Group (ONUG) workshop held at NYU in Manhattan. One highlight was Lakshmi Subramanian’s presentation on the impressive and relevant work being done by researchers in NYU’s Open Networks and Big Data Lab. Lakshmi is also spearheading industry education and training programs in networking, cloud computing, security and Big Data to help close the growing technical skills gap that enterprise IT organizations face as they embrace application development and delivery paradigms that were hardly conceivable ten years ago.

ONUG co-chair Nick Lippis kicked off the workshop with an overview of the upcoming ONUG Fall 2017 event, which will be held in New York City October 17 & 18. Nick described how ONUG’s charter now extends beyond open networking to the full stack of software-defined infrastructure needed to deploy and support a myriad of enterprise IT applications in complex hybrid and multi-cloud environments. As Nick was talking, it brought to mind Dorothy’s line after the tornado drops her down in the Land of Oz: “Toto, I have a feeling we’re not in Kansas anymore.”

Faced with a bewildering array of new software technologies and cloud services, combined with the breakneck pace of innovation, there must be times when IT managers feel like they’ve suddenly landed in a metaphorical Oz, but unlike Dorothy, they don’t have magic ruby slippers to transport them safely home to Kansas. Instead, they need to acquire the skills, tools and know-how to thrive in this amazing new world.

Open networking and open compute platforms have proven to be key enablers for migrating enterprise IT applications to the cloud, but ONUG now has four active working groups whose members are collaborating to identify and map out additional user challenges and critical success factors in other areas of interest:

  • Open SD-WAN Exchange (OSE)
  • Monitoring & Analytics (M&A)
  • Software-Defined Security Services (S-DSS)
  • Hybrid Multi-Cloud (HMC)

SD-WANs promise to upend the legacy enterprise WAN model and deliver services that are more flexible, adaptable and responsive to the demands of hybrid and multi-cloud applications, while allowing enterprises to leverage ubiquitous, high-speed Internet connectivity for SaaS applications and other cloud-based services. However, with so many vendors developing SD-WAN products and solutions, interoperability is a key concern for enterprise users.

The M&A working group is looking at the tools and techniques needed for application, infrastructure and network monitoring, including new technologies like software-based instrumentation, streaming telemetry, Big Data and real-time analytics. Monitoring needs to extend from the legacy on-premises data center and private enterprise WAN to private clouds built on cloud-scale infrastructure, across multiple public cloud services and out to SaaS applications. This is where I spend a lot of my time these days, and it looks nothing like Kansas to me!

The S-DSS working group is developing an intent-based security architecture framework that wraps security policies around workloads, so the policies are independent of the underlying compute infrastructure, portable across multiple environments and not tied to physical locations. This work is important because security will ultimately be the gating factor for large-scale hybrid and multi-cloud deployment of mission-critical applications.
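The S-DSS framework itself is still being defined, but the core idea, policy that travels with the workload rather than with a network location, can be sketched in a few lines. The schema, labels and port numbers below are purely illustrative assumptions of mine, not anything the working group has specified:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    """A workload identified by labels, not by IP address or physical location."""
    name: str
    labels: frozenset  # e.g. {"app:payments", "tier:db"}

@dataclass
class IntentPolicy:
    """A declarative rule that travels with the workload across environments."""
    name: str
    source_selector: frozenset   # labels the source workload must carry
    dest_selector: frozenset     # labels the destination workload must carry
    allowed_ports: tuple

    def permits(self, src: Workload, dst: Workload, port: int) -> bool:
        return (self.source_selector <= src.labels
                and self.dest_selector <= dst.labels
                and port in self.allowed_ports)

# The same policy evaluates identically on-premises, in a private cloud, or in
# any public cloud, because nothing in it references infrastructure or location.
policy = IntentPolicy(
    name="web-to-db",
    source_selector=frozenset({"app:payments", "tier:web"}),
    dest_selector=frozenset({"app:payments", "tier:db"}),
    allowed_ports=(5432,),
)

web = Workload("payments-web-1", frozenset({"app:payments", "tier:web"}))
db = Workload("payments-db-1", frozenset({"app:payments", "tier:db"}))

print(policy.permits(web, db, 5432))   # True
print(policy.permits(db, web, 5432))   # False: intent is directional
```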

The focus of the HMC working group brings us back to my Dorothy analogy. This team is looking at the full spectrum of business, people, security, regulatory and technology issues that IT organizations must address in order to successfully migrate their applications to hybrid and multi-cloud environments. Most mainstream IT managers are still living happily in Kansas, but the tornado is coming and before too long they will find themselves in the Land of Oz. Hopefully the HMC working group guidelines and recommendations will help them successfully navigate the complex array of issues they will be facing.

I hope you are able to attend ONUG Fall 2017 in October. The conference features many sessions with enterprise trailblazers and thought leaders who are pushing the envelope and operationalizing hybrid and multi-cloud application deployment. There will also be a series of vendor proof-of-concept presentations and demos, as well as “Right Stuff” awards for vendors in the vanguard who are providing monitoring and security solutions that address key operational requirements as specified by the M&A and S-DSS working groups.

One last thing. You won’t see any flying monkeys at the event, but there’s always a chance Glinda, the Good Witch of the South, will make an appearance.

 

Deeper visibility into Deepfield

I just watched today’s SDxCentral Nokia Deepfield DemoFriday webinar, featuring Nokia GM and Deepfield architect Dr. Craig Labovitz, who described the product and demonstrated some of its features. Nokia acquired Deepfield earlier this year, and is now disclosing more information about Deepfield and how it fits into Nokia’s IP/optical networks portfolio, which Craig and others described at last month’s IP Networks Reimagined announcement (see my recent blog post).

I’ve been tracking Deepfield since I launched my ACG practice over a year ago, and had been briefed by the company prior to the acquisition, but as Craig acknowledged, Deepfield had been fairly secretive about the product and its technology. So it was good to finally see a demonstration of the actual product and hear Craig describe its capabilities in more detail.

Raising Deepfield’s profile is a good move: Nokia’s global footprint will enable it to sell the product well beyond North America, where Deepfield is deployed by many leading service providers (the company also has customers in Europe).

The premise for Deepfield is straightforward:

  1. The Internet has become much more complicated in the last 10 years, with complex network topologies (particularly in the metro area), the deployment of CDNs, the explosion of streaming video, and the adoption of real-time voice and video communications. The big shift is from the Internet as a set of pipes for best-effort bit delivery to a reliable end-to-end transport mechanism for high-quality content and services with assured quality and performance.
  2. But what tools are available for service providers to deal with this shift? Deepfield recognized early on that advances in network instrumentation, streaming telemetry and Big Data analytics made it feasible to build a software-only platform for network visibility & analytics that was more powerful and yet more cost-effective than solutions employing DPI probes and monitoring appliances.

I would encourage those who are interested to watch a replay of the webinar, but here are some of the highlights:

  1. Deepfield uses “connectors” to implement southbound interfaces that collect data from a disparate array of sources: telemetry data from the network itself, service provider data from OSS/BSS, customer care and billing systems, and data from Deepfield’s “Cloud Genome”, which maintains an up-to-date map of all of the content sources, services and devices on the Internet.
  2. Deepfield supports a petabyte-scale Big Data analytics engine for multi-dimensional, real-time analytics. Craig demonstrated how the system tracks network traffic by content source, website and application type, as well as by network or CDN, and generates intuitive visualizations of traffic load using built-in reports and in response to ad-hoc queries.
  3. Deepfield supports four main use cases: real-time QoE, network engineering, customer care and network security/DDoS. These are implemented as Deepfield applications that leverage a set of northbound interfaces from the core analytics engine. Craig also pointed out that the same interfaces can feed actionable intelligence to external systems supporting these use cases. (A generic sketch of this connector/analytics/northbound pattern follows below.)
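Deepfield’s internal interfaces aren’t public, so purely as an illustration of the connector-to-analytics-to-northbound pattern Craig described, here is a minimal sketch. Every class, field and record in it is a hypothetical stand-in, not the actual product API:

```python
from collections import Counter
from typing import Iterable, Iterator

class Connector:
    """Southbound interface: normalizes records from one telemetry source.
    Real connectors would wrap flow telemetry, OSS/BSS feeds, care/billing data, etc."""
    def __init__(self, source_name: str, raw_records: Iterable[dict]):
        self.source_name = source_name
        self.raw_records = list(raw_records)

    def records(self) -> Iterator[dict]:
        for rec in self.raw_records:
            yield {"source": self.source_name, **rec}

class AnalyticsEngine:
    """Multi-dimensional aggregation over everything the connectors deliver."""
    def __init__(self, connectors):
        self.connectors = connectors

    def traffic_by(self, dimension: str) -> Counter:
        totals = Counter()
        for connector in self.connectors:
            for rec in connector.records():
                totals[rec.get(dimension, "unknown")] += rec.get("bytes", 0)
        return totals

# Northbound consumers (QoE, engineering, customer care, DDoS applications)
# would query the same engine; here we just run an ad-hoc "traffic by content
# source" query over a few synthetic flow records.
flows = [
    {"content_source": "video-cdn-a", "application": "streaming", "bytes": 4_000_000},
    {"content_source": "social-site-b", "application": "web", "bytes": 1_200_000},
    {"content_source": "video-cdn-a", "application": "streaming", "bytes": 2_500_000},
]
engine = AnalyticsEngine([Connector("flow-telemetry", flows)])
print(engine.traffic_by("content_source").most_common())
```

In the real product the southbound connectors ingest live telemetry at carrier scale and the northbound interfaces feed the four Deepfield applications as well as external systems; the sketch only shows the shape of the data flow.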

It was clear from Craig’s brief demo that Deepfield’s software is a powerful tool for service providers, enabling them to gain in-depth, multi-dimensional, real-time visibility into traffic flowing across their networks and the Internet. Without this level of visibility, network operators would be flying blind and would likely have a difficult time monitoring network performance and ensuring digital QoE for content and service delivery.

The webinar was light on implementation details, but Craig did say that the software can run on a cluster of Linux servers in a customer’s data center or can be hosted in the Amazon cloud as a SaaS-based service. Naturally, I’m keen to learn more about the full stack supporting real-time Big Data analytics and how the software is typically deployed operationally by service providers. However, it was good to gain deeper visibility into Deepfield, and I look forward to learning more.

 

 

Nokia couples cloud-scale network visibility with analytics for network automation

I attended Nokia’s IP Networks Reimagined event in June, where the company announced new 7750 SR-s IP core routers based on its new FP4 network processor chip, both impressive technical achievements in their own right.

However, what really got my attention is how Nokia is integrating the technology obtained via the Deepfield acquisition to directly couple cloud-scale network visibility with Big Data analytics for security, performance and network automation.

Deepfield’s petabyte-scale Big Data analytics engine provides visibility into tens of thousands of Internet applications and services as well as billions of IP addresses, mapping what it calls the Cloud Genome. The software is currently used by many leading service providers for DDoS protection and traffic engineering.

Nokia designed the FP4 chip so it can look anywhere inside packets to extract real-time flow telemetry data. This data, along with machine data and network state provided by Nokia’s SR OS router software, feeds the Deepfield analytics engine, whose insights determine the actions taken by Nokia’s NSP software, an SDN-based network automation and management platform.
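Nokia hasn’t published the interfaces between the FP4 telemetry export, the Deepfield engine and NSP, so the following is just a schematic sketch of the telemetry-analytics-action loop being described; the threshold, prefixes and function names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class FlowSample:
    """One flow-telemetry record, e.g. exported in real time from the data plane."""
    prefix: str
    bits_per_second: float

def analyze(samples, ddos_threshold_bps: float):
    """Stand-in for the analytics stage: flag prefixes whose aggregate inbound
    rate looks anomalous. A real engine would correlate many more dimensions."""
    totals = {}
    for s in samples:
        totals[s.prefix] = totals.get(s.prefix, 0.0) + s.bits_per_second
    return [prefix for prefix, bps in totals.items() if bps > ddos_threshold_bps]

def remediate(prefix: str):
    """Stand-in for the automation stage: in practice this would be a call to an
    SDN management/automation platform to install a filter or steer traffic."""
    print(f"requesting mitigation for {prefix}")

# Closed loop: telemetry in, insight out, action back into the network.
samples = [
    FlowSample("203.0.113.0/24", 9e9),
    FlowSample("198.51.100.0/24", 2e8),
    FlowSample("203.0.113.0/24", 6e9),
]
for victim in analyze(samples, ddos_threshold_bps=10e9):
    remediate(victim)
```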

Using real-time network visibility & analytics to derive actionable intelligence that drives network automation is the industry’s “holy grail”. Nokia has articulated its vision for achieving this goal, and I’m keen to learn more about how these three pieces fit together.

For more information about Deepfield, be sure to tune in to Nokia Deepfield DemoFriday at SDxCentral this Friday, July 14, when Deepfield architect and Nokia GM Dr. Craig Labovitz will demonstrate the product’s capabilities.

 

NETSCOUT embraces disruption by porting packet flow visibility software to Open Compute platforms

NETSCOUT recently announced new nGenius PFS 5000 network packet brokers based on off-the-shelf Open Compute platforms. Big Switch Networks blazed this trail back in 2013 with the launch of its Big Monitoring Fabric, but NETSCOUT, which sells a family of purpose-built network packet broker platforms, is embracing disruption by porting its packet flow visibility software to white box switches, giving customers a more cost-effective, easily scalable solution for network-wide visibility.

The benefits of this approach are described in the ACG white paper I authored: “Open Compute Platforms Power Software-Driven Packet Flow Visibility”.

Note that while Big Switch’s BMF is based on the classic SDN architecture using a central controller and the OpenFlow protocol, NETSCOUT has taken a different approach with the PFS 5000, which is based on a fully distributed, mesh architecture that is self-organizing and self-healing.

It will be interesting to watch this market segment evolve as the power of switching platforms based on merchant silicon continues to increase and other network packet broker vendors embrace the disruption of Open Compute.

 

Visibility & analytics at the ONUG Spring 2017 conference

I was invited to speak about “A Framework for Infrastructure Visibility, Analytics and Operational Intelligence” at the Open Networking User Group’s ONUG Spring 2017 conference, held in San Francisco back in April. My presentation is up on Slideshare and ONUG has posted a video of the session.

My goal was to stimulate thinking about how we bring the power of Big Data to infrastructure monitoring and analytics by creating a common framework for tools to share visibility data from an array of sources and feed this data into a set of shared analytics engines to support various operational use cases.

It’s not economically feasible, nor is it technically desirable, for each tool to bring its own Big Data analytics stack and ingest dedicated streaming telemetry feeds. As an industry, we need to think about how we can create more commonality at the lower layers of the stack to implement lower cost solutions that facilitate data sharing and common analytics across a wide range of use cases.
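To make the idea concrete, here is a minimal sketch of the pattern I have in mind: a shared collection layer that normalizes visibility data once and fans it out to multiple analytics consumers. It is purely illustrative and uses an in-process bus as a stand-in for a real streaming platform:

```python
from collections import defaultdict

class VisibilityBus:
    """A shared collection layer: sources publish normalized records once,
    and any number of analytics consumers subscribe to the same data."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, record: dict):
        for callback in self.subscribers:
            callback(record)

# Two independent "tools" sharing one feed rather than each ingesting its own.
latency_by_service = defaultdict(list)
error_counts = defaultdict(int)

def performance_consumer(record):
    latency_by_service[record["service"]].append(record["latency_ms"])

def reliability_consumer(record):
    if record["status"] >= 500:
        error_counts[record["service"]] += 1

bus = VisibilityBus()
bus.subscribe(performance_consumer)
bus.subscribe(reliability_consumer)

for rec in [
    {"service": "checkout", "latency_ms": 42, "status": 200},
    {"service": "checkout", "latency_ms": 480, "status": 503},
    {"service": "search", "latency_ms": 15, "status": 200},
]:
    bus.publish(rec)

print(dict(latency_by_service))
print(dict(error_counts))
```

In production the bus would be a horizontally scalable streaming platform and the consumers would be full analytics engines, but the point is the same: collect once, share widely, analyze many ways.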

On this front, ONUG has a Monitoring & Analytics initiative that is working to define user requirements and develop proof-of-concept demos for a new, comprehensive suite of tools to manage software-defined infrastructure. There was a panel at the conference that provided an update on the status of the initiative, and ONUG has posted a video of this session.

I also moderated an interesting panel discussion on Retooling for the Software-Defined Enterprise that featured Aryo Kresnadi from FedEx, Ian Flint from Yahoo and Dan Ellis from Kentik, all of whom have extensive experience using and building monitoring & analytics tools in cloud-scale environments. ONUG has also posted a video of this session, along with many others from the conference, on its Vimeo channel.

If these topics interest you, be sure to save the date for ONUG Fall 2017, which will be held October 17 & 18 in New York City.

Cloud-scale technologies for cloud-scale infrastructure visibility & analytics

I think we can all agree that cloud-scale technologies are wonderful things, enabling hyper-agile delivery of applications and services to billions of users worldwide. Software-defined networking, virtualization, microservices, containers, open source software and Open Compute platforms are enabling cloud service providers to achieve mind-boggling economies of scale while keeping pace with insatiable user demand.

However, as telecom service providers and large-scale enterprises move to embrace them, cloud-scale technologies are proving to be both a blessing and a curse. The benefits are straightforward: rapidly deliver a broader range of applications and services at lower cost while being able to quickly respond to changing customer needs. The downside is that both service providers and enterprises need to employ new toolsets for developing, deploying and managing these applications and services.

Disaggregation and decomposition are consistent themes in cloud-scale technology. Monolithic platforms are split into software-driven control planes and commodity hardware. Network functions and computing resources are virtualized and decoupled from the underlying hardware. Monolithic applications are decomposed into many microservices that each run in their own container. The business value in terms of lower hardware costs coupled with increased flexibility and agility is real, but there are added costs associated with managing all of these different pieces.

The problem becomes obvious when service providers and enterprises try to apply existing management tools and methodologies to cloud-scale infrastructure. For all their internal complexity, monolithic platforms and applications are simpler to configure, monitor and control than the many layers of different software components running on virtualized infrastructure. While the industry has recently made great strides by adopting new tools for cloud-scale infrastructure configuration and orchestration, we are still playing catch-up in terms of equally effective approaches to cloud-scale visibility and analytics.

Yet here is where cloud-scale technologies come to their own rescue. By disaggregating and decomposing software and hardware functions, with proper instrumentation implemented at each layer and in every component, we can gain full visibility into the entire stack from top to bottom, while new technologies like streaming telemetry provide extremely granular, real-time views of the application and service delivery infrastructure.
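As a simple illustration of what instrumentation at every layer can look like in practice (the record format and component names here are my own invention), each component emits the same structured telemetry shape, tagged with its layer, so a single pipeline can stitch together the full-stack view:

```python
import json
import time

def emit(layer: str, component: str, metric: str, value):
    """Each layer and component emits the same structured record format,
    so one pipeline can correlate hardware, platform and application data."""
    record = {
        "ts": time.time(),
        "layer": layer,          # e.g. hardware, network, platform, application
        "component": component,
        "metric": metric,
        "value": value,
    }
    print(json.dumps(record))    # stand-in for a streaming telemetry export

# The same call shape works at every level of the disaggregated stack.
emit("hardware", "leaf-switch-12", "interface_rx_bps", 8.2e9)
emit("platform", "kubelet-node-7", "container_restarts", 3)
emit("application", "checkout-svc", "p99_latency_ms", 180)
```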

Therefore, it’s only natural that cloud-scale visibility and analytics should be implemented on cloud-native platforms, leveraging the same technologies: software-defined networking, virtualization, microservices, containers, open source software and Open Compute platforms. This is especially critical when employing Big Data analytics, where the underlying technologies are inherently cloud-scale and well suited to ingesting streaming telemetry feeds and performing real-time streaming analytics on that data.

 

Real-time network visibility & analytics for operational intelligence

When I launched my network analytics practice for ACG just over a year ago, I decided that my initial research needed to focus on the value of real-time network visibility and Big Data analytics for operational intelligence. SDN, virtualization and the widespread adoption of cloud-scale technologies are enabling new techniques, including streaming telemetry, for instrumenting networks and gaining real-time visibility into traffic flows and network state. At the same time, streaming analytics allows network operators to turn insights into action within seconds or minutes instead of hours or days. Big Data also supports the large-scale data sets needed to apply machine learning techniques for predictive analytics and AI-based network automation.

The ROI for real-time operational intelligence is compelling across a wide array of use cases, including rapid root cause analysis and reduced mean time to repair (MTTR); immediate detection of security threats inside the network perimeter; real-time performance monitoring for on-the-fly traffic engineering; continuous KPI monitoring for service assurance; and the holy grail: closed-loop feedback for analytics-driven automation. The potential gains are huge, and the industry is witnessing a new wave of innovation that will enable us to reinvent how networks are deployed and operated, and how services are delivered and managed.
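As a toy example of the kind of streaming computation involved (entirely illustrative, with made-up numbers), here is a rolling-baseline KPI monitor that flags a deviation as soon as the sample arrives, rather than waiting for a batch report:

```python
from collections import deque
from statistics import mean, pstdev

class RollingKpiMonitor:
    """Keep a sliding window of recent samples and flag values that deviate
    sharply from the window's baseline, as they arrive."""
    def __init__(self, window: int = 60, sigmas: float = 3.0):
        self.samples = deque(maxlen=window)
        self.sigmas = sigmas

    def observe(self, value: float) -> bool:
        alert = False
        if len(self.samples) >= 10:
            baseline, spread = mean(self.samples), pstdev(self.samples)
            alert = spread > 0 and abs(value - baseline) > self.sigmas * spread
        self.samples.append(value)
        return alert

monitor = RollingKpiMonitor(window=30)
stream = [20.0 + (i % 3) for i in range(40)] + [95.0]   # latency in ms, then a spike
for i, latency in enumerate(stream):
    if monitor.observe(latency):
        print(f"sample {i}: latency {latency} ms deviates from the rolling baseline")
```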

Network operators are leveraging new real-time visibility and analytics technologies in three separate, but interconnected, domains:

  • Telecom network and communication services
  • Cloud-scale services delivered via the Internet
  • Hyperscale data center infrastructure

Therefore, my research in operational intelligence has a separate track covering developments in each domain, although there is overlap between them. For example, new telecom services are being delivered via the cloud, and SD-WANs are telecom services that use the Internet to connect users to applications in the cloud. The cloud-scale services track looks at visibility and analytics from the perspective of a network operator delivering or consuming services over networks it doesn’t own or operate, whereas the hyperscale data center track looks at the role of visibility and analytics in managing the infrastructure used to deliver cloud-scale services.

As a result, my research spans three separate, but interrelated, markets:

  • Telecom services
  • Cloud-scale services
  • Enterprise IT

While today these are three distinct markets, over the next decade I expect the lines to blur as the industry converges on delivering the majority of applications and services via public and hybrid clouds. Picture one vast market, cloud-scale services, segmented by application type: consumer, enterprise IT, communications, IoT, etc. At that point, the network simply provides access and transport for user devices, machines and sensors to connect with applications running in the cloud.

As an industry, we need to solve many technical problems in order to get there, with security being the most significant challenge, but today’s breakthroughs in real-time network visibility and Big Data analytics will play a key role in realizing this vision.