Counting bytes and measuring latencies

Why is continuous delivery feasible now?

I joined ThoughtWorks as a sysadmin, and only recently have I started working as a consultant on our client projects. Now, as I work with different offshore development teams across projects, I actually get to experience XP practices. A few days back I was discussing agile software development and its history with Chai (a veteran ThoughtWorker). I was aware of most of the tenets of agile software development and of implementations like XP, Scrum, etc., but Chai gave me a different perspective. He explained the timing at which agile software development was triggered: though the pain it addresses was not new, the movement still took shape within a certain time frame. His explanation revolved around four points:

  1. Moore's law (cheap hardware let us focus on maintainable code rather than merely performant code)
  2. Why learnings from large civil engineering projects are not directly applicable to software projects (detailed upfront planning and architecture fail for large software because software is not governed by the laws of physics, which are well defined)
  3. OOPSLA 
  4. A graph of `cost of change` vs. time, i.e. what it costs to introduce a code change as a software project progresses. It grows exponentially.

This discussion made me think: why are we talking about continuous delivery so much now? Time to retrospect on why, two years earlier, we were not making so much buzz around it.

 

I have worked on various CI tools and observed a variety of techniques and strategies that can be used to ensure code quality at any given time. It has long been known that the final bottleneck remained in deployments, and this becomes more evident as the software becomes more enterprise class and develops more integration points. But only recently have we been able to extend CI all the way to deployment. Why?

  1. Maturity of infrastructure management tools: Be it monitoring solutions or configuration management systems, infrastructure management toolchains have matured significantly, both in capturing the infrastructure context and in integrating easily with each other.
  2. Rise of the cloud: With the rise of the cloud, even server provisioning can be triggered programmatically. This helped us join the last dot, i.e. scaling up or down on demand and having an elastic infrastructure (a minimal provisioning sketch follows this list).
  3. Infrastructure as code: A mature infrastructure toolchain, along with a cloud, lets you express your infrastructure as code. Hence you can use standard CI or other tooling to test it in a sandbox environment, just like your application code (see the second sketch after this list). This also gives you the ability to recreate your infrastructure at will.
  4. The DevOps movement: Being able to code does not mean you can develop a software solution; you also need to understand the functional domain. Similarly, you can't really exploit all the infrastructure tooling unless you understand the infrastructure itself. DevOps is a movement that encourages breaking down the silos between operations and development teams, to foster cross-collaboration for better software development. There is a lot of debate around whether it is a culture, whether it should be used as a job title, etc., but at least to me it's a movement that helped me connect with like-minded people, relevant web content, and awesome tools.
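
To make point 2 concrete, here is a minimal sketch of programmatic provisioning in Python with the boto3 AWS SDK. The AMI id, region, and instance type are hypothetical placeholders; the point is only that bringing servers up and down becomes an API call, so elasticity can be scripted:

```python
# Minimal sketch: elastic infrastructure as API calls. Assumes boto3 is
# installed and AWS credentials are configured; the AMI id and instance
# type are placeholders, not recommendations.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def scale_up(count):
    """Provision `count` servers programmatically."""
    response = ec2.run_instances(
        ImageId="ami-12345678",   # hypothetical machine image
        InstanceType="t2.micro",
        MinCount=count,
        MaxCount=count,
    )
    return [i["InstanceId"] for i in response["Instances"]]

def scale_down(instance_ids):
    """Tear the same servers down when demand drops."""
    ec2.terminate_instances(InstanceIds=instance_ids)

# Scaling on demand is now just code:
#   ids = scale_up(3)
#   ... absorb the traffic spike ...
#   scale_down(ids)
```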
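
And for point 3, a toy illustration of what "testing your infrastructure like application code" can mean. Once the infrastructure description is data (here a plain Python dict; in practice a Puppet or Chef manifest), CI can assert properties about it before anything touches a real server. The hosts, roles, and rules below are entirely hypothetical:

```python
# Toy sketch: an infrastructure description as data, checked by a unit
# test that runs in CI. Hosts, roles, and ports are hypothetical.
import unittest

INFRASTRUCTURE = {
    "web-01": {"roles": ["webserver", "monitoring-agent"], "open_ports": [80, 443]},
    "web-02": {"roles": ["webserver", "monitoring-agent"], "open_ports": [80, 443]},
    "db-01":  {"roles": ["database", "monitoring-agent"],  "open_ports": [5432]},
}

class InfrastructureTest(unittest.TestCase):
    def test_every_host_is_monitored(self):
        # The "infrastructure context" captured as an assertion.
        for host, spec in INFRASTRUCTURE.items():
            self.assertIn("monitoring-agent", spec["roles"],
                          "%s has no monitoring agent" % host)

    def test_databases_do_not_expose_web_ports(self):
        for host, spec in INFRASTRUCTURE.items():
            if "database" in spec["roles"]:
                self.assertNotIn(80, spec["open_ports"], host)

if __name__ == "__main__":
    unittest.main()
```

Because the description is code, it can also be versioned, reviewed, and replayed to recreate the environment at will.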

As far as I can tell, these are a few of the critical points that make continuous delivery (CD) feasible now. Obviously these observations are based entirely on my own experience, and there are cases where complementary factors beyond these helped organizations enable CD.