We’ve written a lot lately about progressive delivery and how it can help organizations deploy more quickly and get feedback on changes before releasing them widely. Progressive delivery uses experimentation techniques such as feature flags, blue-green deployments, and canary releases to show new features or bug fixes to a small cohort of users. Feedback from those experiments drives the decision to release the change broadly or roll it back for more work. These experiments enable organizations to decouple deployment from release.
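To make that decoupling concrete, here is a minimal sketch of a flag-gated code path. The flag name, the engine functions, and the plain-dict lookup are all illustrative stand-ins; real flag platforms evaluate flags per user through an SDK.

```python
def legacy_engine(user_id: str) -> list[str]:
    return ["classic-result-1", "classic-result-2"]

def new_engine(user_id: str) -> list[str]:
    return ["new-result-1", "new-result-2"]

def get_recommendations(user_id: str, flags: dict[str, bool]) -> list[str]:
    # The new engine is deployed alongside the old one, but it is only
    # *released* to users whose flag evaluates to on.
    if flags.get("new-recommendation-engine", False):
        return new_engine(user_id)
    return legacy_engine(user_id)

print(get_recommendations("david", {"new-recommendation-engine": True}))
```

Turning the flag off puts every user back on the legacy path instantly, with no redeploy.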
In a recent conversation with Dave Karow, evangelist at feature flag platform provider Split Software, he discussed layered progressive delivery. This approach, he explained, begins with finding consensus with developers and SREs. “There’s nobody that’s not going to want better cycle time, shorter cycles. There’s nobody that’s not going to want to automate the ability to detect when things go awry that you didn’t expect,” he said. “There’s probably — hopefully — not too many people that aren’t going to want to know whether the thing they just did had an effect.”
He said this new approach to progressive delivery builds layer upon layer of richness to get more out of the experiments. He vehemently debunked the notion that experimentation requires both hardcore statistical rigor and two versions of the code.
Savvy experimenters, Karow said, use dynamic config, which he explained allows development teams to send data along with a flag to set different parameters for different users. He said the parameters of a recommendation engine, for example, “could dictate, do I want to give David many answers, or just a handful? And if you decide whether to expose people to this new thing, you could also create two or three cohorts that each have different parameters. Now you’ve got people on your legacy engine, and you’ve got two or three cohorts in the new one, and you’re trying different things — like lots of answers, not very many answers, ranked by popularity versus ranked by relevance.” He made the critical point that you can change the values of the flags and those parameters without creating new code versions.
“So now David is in cohort three that gets this, but we’ve just changed that he’s going to see results ranked by popularity instead of relevance in the engine. And we will run that for a week and see what happens. That’s not three copies of code.”
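Here is a rough sketch of what that pattern can look like in code. The evaluate() helper, cohort names, and parameters are hypothetical stand-ins for a flag SDK call that returns both a treatment and its config payload (Split’s SDKs expose a similar pattern via getTreatmentWithConfig); the point is that re-tuning a cohort means editing the flag’s config, not shipping new code.

```python
import json

def legacy_engine(user_id: str) -> list[str]:
    return ["classic-result"]

def new_engine(user_id: str, max_results: int, rank_by: str) -> list[str]:
    return [f"{rank_by}-result-{i + 1}" for i in range(max_results)]

def evaluate(user_id: str) -> tuple[str, str | None]:
    """Stand-in for an SDK call that buckets a user and returns config JSON."""
    cohorts = {
        0: ("legacy",   None),
        1: ("cohort-1", '{"max_results": 50, "rank_by": "popularity"}'),
        2: ("cohort-2", '{"max_results": 5, "rank_by": "popularity"}'),
        3: ("cohort-3", '{"max_results": 5, "rank_by": "relevance"}'),
    }
    return cohorts[hash(user_id) % 4]  # real SDKs bucket deterministically

def recommend(user_id: str) -> list[str]:
    treatment, config = evaluate(user_id)
    if treatment == "legacy" or config is None:
        return legacy_engine(user_id)
    params = json.loads(config)  # the parameters ride along with the flag
    return new_engine(user_id, **params)

print(recommend("david"))
```

Switching cohort three from relevance to popularity ranking is an edit to the flag’s config payload, not a third copy of the code.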
When Karow talks about a layered approach, he simply means implementing progressive delivery in progressively more value-rich ways, starting with the layer that is least threatening and least likely to be a point of debate with developers.
A hidden benefit of using a feature flag platform to deliver the variations is that it also captures telemetry from each cohort separately and processes that data, so teams can quickly compare how each cohort behaves.
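A bare-bones illustration of that per-cohort telemetry, with a hypothetical track() call standing in for a flag platform’s event API:

```python
from collections import defaultdict

# Every event is tagged with the treatment the user saw, so cohorts can be
# compared directly without any extra analytics plumbing.
events: dict[str, list[float]] = defaultdict(list)

def track(treatment: str, metric: str, value: float) -> None:
    events[f"{treatment}/{metric}"].append(value)

# e.g. recorded after serving each recommendation:
track("cohort-3", "click_through", 1.0)
track("legacy", "click_through", 0.0)

def cohort_mean(treatment: str, metric: str) -> float:
    vals = events[f"{treatment}/{metric}"]
    return sum(vals) / len(vals) if vals else 0.0

print(cohort_mean("cohort-3", "click_through"))  # compare against "legacy"
```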
Automating guardrails, such as continuously monitoring error rates, can provide insights you might not have expected. Karow gave an example from LinkedIn, which he said has been experimenting for a long time. Its developers ran an experiment on which version of an application would lead people to create more job listings. They weren’t monitoring the application for speed, but the platform alerted them that the changes had made the application slower. “When the thing that’s rolling it out also is the thing that’s keeping track of how it’s going, it becomes straightforward to know what’s happening,” he said.
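A toy version of such a guardrail might look like the sketch below. The p95 latency readings, the tolerance, and the alert action are assumptions for illustration, not anything LinkedIn or Split has published.

```python
def latency_guardrail(control_p95_ms: float, cohort_p95_ms: float,
                      tolerance: float = 1.10) -> bool:
    """True if the cohort's p95 latency stays within tolerance of control."""
    return cohort_p95_ms <= control_p95_ms * tolerance

# Hypothetical readings pulled from each cohort's telemetry:
if not latency_guardrail(control_p95_ms=120.0, cohort_p95_ms=180.0):
    print("guardrail tripped: new version is slower; alert and consider rollback")
```

The key property is that the check runs automatically on every experiment, even when speed wasn’t the metric anyone set out to test.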
The next layer is measuring release impact. “If you achieve shorter lead times, and you’re shipping a lot, you might be like a hamster on a wheel, like you’re in a feature factory, and it sucks,” Karow said. “It’s demotivating. But if you have direct evidence of your efforts, it leads to pride of ownership.”
The top layer is test to learn, an area Karow said can help organizations take bigger risks safely. He gave the example of a food delivery service that wanted to ask customers questions about their eating and shopping habits to fine-tune its service but didn’t want to ask too many questions for fear of turning off its users. So, he said, they tried a status quo version, a modest release, and a “go for it” release that added two or three minutes to onboarding. And right away, he said, they saw more money from every customer.
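The shape of that experiment might be expressed as a configuration like the following sketch; the flag name, traffic percentages, question counts, and metric names are invented for illustration and don’t reflect the actual service’s setup.

```python
ONBOARDING_EXPERIMENT = {
    "flag": "onboarding-questions",
    "treatments": {
        # Three variations, from least to most aggressive:
        "status-quo": {"percent": 34, "extra_questions": 0},
        "modest":     {"percent": 33, "extra_questions": 3},
        "go-for-it":  {"percent": 33, "extra_questions": 10},
    },
    "guardrail_metrics": ["onboarding_time_minutes", "signup_drop_off"],
    "success_metric": "revenue_per_customer",
}

# Traffic must be fully allocated across the three treatments.
assert sum(t["percent"] for t in ONBOARDING_EXPERIMENT["treatments"].values()) == 100
```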
So instead of the usual pre-release hand-wringing (“Do it.” “Don’t do it. We’ll lose everything. We’ll miss our quarter.”), they tried these changes out safely, gaining concrete data from genuine customers.