The “3 D’s” of hybrid and multi-cloud application deployment

While describing the challenges of enterprise IT application development in his FutureStack keynote, New Relic CEO Lew Cirne addressed the key question: “How to go fast at scale?” He pointed out that it’s not uncommon for DevOps shops to perform HUNDREDS of application deploys per DAY, while larger outfits deploy THOUSANDS. Listening to Lew describe how New Relic’s customers are rapidly developing and deploying cloud-based applications, it really hit me again: “Toto, we’re not in Kansas anymore.”

This got me thinking about the “3 D’s” of cloud application deployment:

  1. Dynamic
  2. Distributed
  3. Diverse

Let’s explore each of these and the challenges they are creating for DevOps, ITOps, SecOps and NetOps teams charged with deploying, securing, monitoring and managing hybrid and multi-cloud applications along with the underlying application and network infrastructure.

Dynamic. The basic premise of DevOps is that small, highly focused teams work separately but in parallel, continuously developing and deploying independent parts that make up a greater whole. The process is dynamic by nature, with some teams doing hundreds of deploys per day. More importantly, application run-time environments are becoming increasingly dynamic. In a Docker environment, new containers can be spun up and down in seconds, driven by the ebb and flow of application demands. In a microservices architecture, in which applications are composed of small, modular services, the interactions between the microservices themselves are inherently dynamic and unpredictable, as new application capabilities are created by different combinations of the supporting microservices.
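
To make the dynamic run-time point concrete, here is a minimal sketch of demand-driven container scaling using the Docker SDK for Python. The image name, label and scaling trigger are hypothetical placeholders, not anything prescribed by a particular vendor or platform:

```python
# Sketch: scale a pool of containers up or down to match demand,
# using the Docker SDK for Python ("pip install docker").
import docker

client = docker.from_env()  # connect to the local Docker daemon

def scale_to(desired_count: int) -> None:
    """Spin containers up or down so the pool matches current demand."""
    running = client.containers.list(filters={"label": "app=web"})
    if len(running) < desired_count:
        # Demand is up: new containers start in seconds.
        for _ in range(desired_count - len(running)):
            client.containers.run("web-app:latest", detach=True,
                                  labels={"app": "web"})
    else:
        # Demand is down: stop the surplus containers.
        for container in running[desired_count:]:
            container.stop()
```

In production, a decision like this would be driven by live metrics (CPU, queue depth, request rate) rather than a hand-fed count, which is exactly why monitoring such environments matters so much.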

Distributed. Hybrid and multi-cloud environments are highly distributed, with applications and data possibly residing on-premises in legacy three-tier data centers, on-premises in private clouds built on cloud-scale architectures, or in one or more public clouds utilizing SaaS, PaaS and IaaS capabilities and serverless computing. In addition, the underlying cloud compute and application infrastructures are themselves highly distributed in order to ensure high availability and to scale compute and storage capacity on demand. The interactions between application components distributed across these environments can be very complex, both within a given data center and over the network between data centers. We truly live in an age when “the network is the computer”.

Diverse. Application development is highly diverse, with enterprise IT developers using many different programming languages and run-time environments, including bare metal servers, virtual machines and containers. There are also multiple software frameworks used to implement these environments, and developers may mix and match components to create their own custom stacks. Each cloud service provider offers its own set of application services, supported by its own full stack and characterized by a comprehensive set of APIs. Finally, there are many different ways data can be stored and queried, ranging from legacy RDBMSs to the latest NoSQL Big Data repositories.

Combined, these “3 D’s” are creating serious challenges for enterprise operations teams and have put a premium on monitoring and analytics solutions that provide real-time visibility into what is happening at the application, infrastructure and network layers, and that correlate anomalies and events at one layer with observed behavior and conditions at another. I think it’s safe to say “we’re not in Kansas anymore”!

Returning to FutureStack, Lew closed his keynote by describing the challenge of “interconnectivity” in “3 D” environments and the use of instrumentation for “transaction tracing” to map out the flow of service execution and identify problematic services that may be dragging down overall performance. Lew noted that in this area, New Relic is leveraging open source software: OpenTracing, a Cloud Native Computing Foundation project.
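
For readers who haven’t seen it, this is roughly what that kind of instrumentation looks like at the code level. Below is a minimal sketch using the generic OpenTracing Python API rather than New Relic’s own agent; the operation names and tags are hypothetical, and the module-level tracer is a no-op until a concrete tracer implementation is registered:

```python
# Sketch: transaction tracing with the OpenTracing Python API
# ("pip install opentracing"). Spans link into a trace that maps
# the flow of execution across services.
import opentracing

tracer = opentracing.tracer  # no-op unless a real tracer is registered

def handle_checkout(order):
    with tracer.start_span("checkout") as span:
        span.set_tag("order.id", order["id"])
        # Child spans capture time spent in downstream services,
        # making slow or failing services visible in the trace.
        with tracer.start_span("charge-card", child_of=span) as child:
            child.set_tag("payment.amount", order["total"])
            # ... call the payment service here ...
```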

The interconnectivity problem is yet another reason why the solutions that New Relic and other APM vendors are developing are so critical. If DevOps and ITOps teams don’t have the tools they need to monitor and manage large-scale deployments of highly dynamic and distributed applications across heterogeneous environments, enterprise IT won’t be able to “go fast at scale”. The result will be higher operating expenses, lost business opportunities and a serious drag on digital transformation initiatives.

FutureStack – New Relic’s customer conference

I recently attended New Relic’s FutureStack customer conference in New York City, a well-organized event with great content delivered by subject matter experts, including many New Relic customers. It was my first engagement with the New Relic team and a good opportunity to take an in-depth, top-down look at the world of visibility and analytics from the perspective of application performance monitoring (APM).

New Relic is a fast-growing leader in the APM market, with revenue of $263.5 million in fiscal 2017, up 45% from fiscal 2016. More than 16,000 customers worldwide use New Relic’s SaaS-based product suite, including 40% of the Fortune 100. Company founder and CEO Lew Cirne was a pioneer in the modern APM market, founding Wily Technology almost 20 years ago. It was refreshing to hear that Lew is still a developer at heart and takes regular week-long sabbaticals to work on ideas for new products.

New Relic offers a complementary set of products that serve as a “Digital Intelligence Platform” across three interrelated domains: digital user experience, application performance and infrastructure monitoring. The company’s core technology and expertise are embodied in its APM product line, which is used to instrument applications written in the leading programming languages and running across a wide range of execution environments. In his keynote, Lew emphasized that New Relic’s approach is to “instrument everything” so that DevOps teams always have full visibility into the behavior and performance of all applications. He noted that the old rule was that nothing goes into production without a full QA cycle; the new rule is that no application should be deployed without complete instrumentation.
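
As a rough illustration of what “instrument everything” means in practice, here is a sketch using New Relic’s Python agent. The config file path and task name are placeholders, and real deployments are usually wrapped with the newrelic-admin script rather than initialized in code:

```python
# Sketch: instrumenting a background job with the New Relic Python
# agent ("pip install newrelic"). Supported web frameworks are
# instrumented automatically; custom work can be decorated explicitly.
import newrelic.agent

# Read the license key and app name from a config file (placeholder path).
newrelic.agent.initialize("newrelic.ini")

@newrelic.agent.background_task(name="nightly-report")
def generate_report():
    # Everything in this function is reported to APM as a
    # transaction, just like an instrumented web request.
    ...
```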

New Relic also provides several products for monitoring user experience by instrumenting mobile applications and browsers, including synthetic monitoring solutions that can proactively detect problems before users are impacted. Last year, the company moved into infrastructure monitoring, extending beyond basic server/OS monitoring to integrate a wide range of cloud-native application services provided by AWS and Microsoft Azure. Together, the full suite of New Relic products enables development and IT operations teams to see a complete picture of application behavior and performance, from the endpoint to the execution environment and the underlying service infrastructure.

How does New Relic make sense of all the metrics and event data extracted through this ubiquitous instrumentation? “Applied intelligence” is the other side of the “instrument everything” coin, and this is where New Relic is doing impressive work with Big Data and real-time analytics. The company operates its own cloud infrastructure to deliver SaaS-based services to its customers. To ingest, process and store the massive amount of metric and event data collected from customer applications, New Relic built its own high-performance, multi-tenant Big Data database from the ground up. The system currently processes on average 1.5 BILLION metrics and events per MINUTE. That’s a whole lot of data, and it speaks to why I believe SaaS-based analytics is the preferred approach for the vast majority of Big Data monitoring solutions, for several reasons.

First, SaaS solutions have significantly lower up-front costs and can be deployed rapidly. Second, the elastic nature of the cloud allows the customer to rapidly scale monitoring on demand. Third, Big Data technology is a moving target, and a SaaS solution shields the customer from having to deal with software updates and hardware upgrades, as well as possible technology obsolescence. Last, and perhaps most importantly, since applications are migrating to the cloud, monitoring and analytics should follow. Given the option of a cloud-based Big Data monitoring solution, I can’t think of a good reason why mainstream enterprise IT organizations would choose to deploy on-premises.

New Relic’s Insights product provides the window into this applied intelligence, visualizing application data in user-customizable dashboards that customers showcased in the main tent session that concluded the conference. Under the hood, New Relic employs advanced statistical analysis and other techniques to correlate data extracted from user experience, application and infrastructure monitoring.
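
For a sense of what sits behind those dashboards, here is a sketch of pulling the same data programmatically over the Insights query API using NRQL, New Relic’s query language. The account ID, query key and NRQL statement are placeholders:

```python
# Sketch: querying Insights event data with NRQL over HTTPS
# ("pip install requests"). Credentials below are placeholders.
import requests

ACCOUNT_ID = "1234567"                 # hypothetical account
QUERY_KEY = "YOUR_INSIGHTS_QUERY_KEY"  # hypothetical key
NRQL = "SELECT average(duration) FROM Transaction FACET appName SINCE 30 minutes ago"

resp = requests.get(
    f"https://insights-api.newrelic.com/v1/accounts/{ACCOUNT_ID}/query",
    headers={"X-Query-Key": QUERY_KEY, "Accept": "application/json"},
    params={"nrql": NRQL},
)
resp.raise_for_status()
print(resp.json())  # the same data that drives an Insights dashboard widget
```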

One example is RADAR, a new Insights feature introduced at FutureStack. RADAR “looks ahead and behind” to automatically glean useful intelligence for “situational awareness” that might not be readily apparent to customers looking at the usual dashboards. The analytics software acts like an intelligent assistant, constantly searching for anomalies and conditions that the customer might overlook or not discover until it’s too late. It’s not necessarily AI in the strictest sense of the term, but it’s certainly just as helpful.

FutureStack was also a great forum for learning how many leading enterprise IT organizations are embracing DevOps for application deployments spanning hybrid and multi-cloud environments. I’ll wrap up my thoughts in my next post with a closer look at this far-reaching trend and its market impact.