This week, TNS questions whether concerns over lock-in are warranted in this cloud era.
The New Stack Update

ISSUE 230: The Lock-In Monster

Talk Talk Talk

“The question is, what is the risk you’re trying to hedge for by creating a ton of extra cost and losing a lot of capability because you’re not using the native function?”

– Donnie Berkholz, executive in residence at Scale Venture Partners
Add It Up
Percentage of codebase pulled from open source

Nine out of 10 components in the average application are open source, according to an analysis of 1,700 apps in Sonatype's "State of the Software Supply Chain." In its own report, Synopsys found that 70% of the customer codebases it audited are open source. Those are high-end estimates. A survey of people familiar with application security, conducted by ESG, provides a lower figure: only 43% believe that more than half of their enterprise's codebase is open source.

Why the wide variation in numbers? Semantics. A report co-written by Frank Nagle of the Harvard Business School notes that Software Composition Analysis vendors don't share a common definition of what constitutes a "component." For example, a package containing many sub-components is counted as a single, separate entity in some data sets but broken out into its parts in others. Furthermore, defining what constitutes an application is inherently subjective.
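To see how much the counting definition matters, here is a minimal sketch (not from any of the reports; the package names and sub-component counts are hypothetical) that computes the open source percentage of the same codebase under two definitions of "component":

```python
# Hypothetical application inventory: each top-level package mapped to the
# number of open source sub-components bundled inside it.
packages = {
    "internal-billing": 0,   # proprietary, nothing open source inside
    "web-framework": 12,     # open source, 12 open source sub-components
    "logging-lib": 3,        # open source, 3 open source sub-components
    "auth-module": 0,        # proprietary
}

open_source_packages = {"web-framework", "logging-lib"}

# Definition A: each top-level package counts as one component.
total_a = len(packages)
oss_a = len(open_source_packages)

# Definition B: every sub-component counts as its own component
# (the package itself plus everything bundled inside it).
total_b = sum(1 + subs for subs in packages.values())
oss_b = sum(1 + subs for name, subs in packages.items()
            if name in open_source_packages)

print(f"Definition A: {oss_a}/{total_a} = {oss_a / total_a:.0%} open source")
print(f"Definition B: {oss_b}/{total_b} = {oss_b / total_b:.0%} open source")
```

The same four-package codebase comes out 50% open source under one definition and roughly 89% under the other, which is the kind of gap that separates the ESG survey responses from the Sonatype and Synopsys figures.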

Only a third of respondents in the ESG study believe that more than 75% of their codebase is protected by application security tools. Is this good or bad? Should we care about the number of lines of code, components, or applications covered?

This is more than an academic debate. Decisions to purchase software are being made based on how much software is at risk. If a product is supposed to identify and resolve issues in dependencies, how should potential buyers benchmark vendor performance? These are the types of questions being addressed by the recently announced Open Source Security Foundation. Stay tuned for more data and analysis about how the growth of open source components is impacting enterprise IT.

What's Happening

A key function that service meshes should increasingly offer is giving DevOps teams better observability into which events are causing application deployment and management problems. They should also help determine which team can take appropriate action.

In this final episode of The New Stack Makers three-part podcast series featuring Aspen Mesh, Alex Williams, founder and publisher of The New Stack, and TNS correspondent B. Cameron Gain discuss with their guests how service meshes help DevOps teams stave off the pain of managing complex cloud native and legacy environments, and how those capabilities can translate into cost savings. With featured guests Shawn Wormke, vice president and general manager at Aspen Mesh, and Tracy Miranda, director of open source community at CloudBees, they also cover what service meshes can — and cannot — do to help meet business goals, and what to expect in the future.

How a Service Mesh Amplifies Business Value

The Lock-In Monster

This week, TNS Correspondent Emily Omier writes about whether concerns over lock-in are warranted in this cloud era. "Lock-in," as the Martin Fowler website explains, "makes switching from one solution to another difficult. Many architects may, therefore, consider it their archenemy."

But is it, really?

"Concern about cloud lock-in is probably overblown and likely counterproductive," Omier writes. She is backed by Donnie Berkholz, executive in residence at Scale Venture Partners, who asserts that industry-wide fear of lock-in is a holdover from the days when enterprises ran their own data centers. In that world, lock-in meant the service provider could charge exorbitant rates, knowing the cost of moving operations elsewhere would be even more prohibitive. But this is not how things actually work with cloud providers, Berkholz said.

Corey Quinn, cloud economist at The Duckbill Group, seconded that assessment. "At scale, everything becomes negotiable," he explained. The extra engineering required to make a company's workloads truly cloud portable (through cloud native open source, for instance) would cost more than whatever savings could be had by avoiding lock-in.

“Very few companies spend more on their AWS bill than they do on payroll,” Quinn said. “People’s time is the most constrained and expensive asset companies have. Do you want them maintaining a database or fixing whatever business problem your company solves?”

Are you afraid of the "Lock-In" Monster? Or nah? Drop us a line and let us know!

Azure Arc Is Developing into a Full Hybrid Infrastructure System

Azure Arc extends the Azure control plane to manage resources beyond Azure: VMs and Kubernetes clusters wherever they run, whether Windows or Linux, on any Cloud Native Computing Foundation-certified Kubernetes distribution, even if they are not always connected to the internet.

Kubermatic KubeCarrier Readies a Single Interface for Multiclusters and Multiclouds

New from Kubermatic (formerly Loodse): KubeCarrier, software that the company says provides an abstraction layer over Kubernetes Operators to provision applications across different clusters and clouds. It will be part of the Kubermatic Kubernetes Platform, designed to help operations teams automate and manage "Day 2" Kubernetes operations.

Ruckstack: Containerized Package Management for Kubernetes

Kubernetes goes a long way toward standardizing cloud installs across vendors, but it doesn’t help your on-prem installs. Enter Ruckstack, an open source installation and runtime system designed for ISVs and powered by an application-sized Kubernetes system. Think of it as a modern application server.

Party On

Analyst Lawrence Hecht (lower left) shares his cloud native market research in this week's The New Stack Context podcast, along with Alex Williams, Joab Jackson and Richard MacManus (clockwise)

On The Road
AUGUST 17, 2020 // VIRTUAL

KubeCon + CloudNativeCon
Kubernetes is boring and that’s a good thing. It’s what’s on top of Kubernetes that counts. So join us for a short stack with The New Stack as we ask: “What’s on your stack?” We’ll pass the virtual syrup, and talk about all that goes with Kubernetes. It may be stateless, but that also means there’s plenty of room for sides … Register now!
The New Stack Makers podcast is available on:
SoundCloud, Fireside.fm, Pocket Casts, Stitcher, Apple Podcasts, Overcast, Spotify, TuneIn

Technologists building and managing new stack architectures join us for short conversations at events on the tech conference circuit. These are the people defining how applications are developed and managed at scale.
Pre-register to get the new second edition of the Kubernetes ebook!

A lot has changed since we published the original Kubernetes Ecosystem ebook in 2017. Kubernetes has become the de facto standard platform for container orchestration and market adoption is strong. We now see Kubernetes as the operating system for the cloud — evolving into a universal control plane for compute, networking and storage that spans public, private and hybrid clouds. In this ebook you’ll learn:

  • Kubernetes architecture.
  • Options for running Kubernetes across a host of environments.
  • Key open source projects in the Kubernetes ecosystem.
  • Adoption patterns of cloud native infrastructure and tools.
Download Ebook
We are grateful for the support of our ebook sponsors:





Copyright © 2020 The New Stack, All rights reserved.

