
The DevOps movement started in 2009 with a talk delivered by John Allspaw and Paul Hammond at the O’Reilly Velocity conference. The term DevOps was coined a few months later by Patrick Debois, who organized the first DevOpsDays later that same year.

The goal of DevOps is simple: to optimize the entire delivery pipeline, across all functional areas. Initially, its traction was limited mainly to startups delivering web applications, and for several years DevOps remained confined to the realm of small, innovative organizations that did not have to cope with the burden of maintaining complex legacy systems. Large enterprises, however, were watching the progress that startups achieved with DevOps, and its real growth into the enterprise space ignited around 2012, when IBM started to use SmartCloud Continuous Delivery.

DevOps Adoption in the Enterprise

Delivering in small batches is one of the core concepts of DevOps. It has the obvious advantages of minimizing the impact of any change and of shortening the feedback cycle. It also makes every step of delivering the application to the end users simpler: testing, security validation, deployment.

In theory it’s all easy: all you have to do is break the work into small batches.

Of course, practice is different.

Most enterprise applications are monolithic, and delivering them in small batches is not always a good option. Usually, an enterprise application is composed of at least three deployable assets:

– Data Layer, which includes all database servers, file shares, etc.

– Application Layer, which controls the business logic and provides the application’s functionalities.

– Presentation Layer, which displays information to the user, usually through a web page, a mobile or desktop app, or an API for third-party integration.

As a result, from a deployment perspective there are at least three large assets to deploy, and for a large application there is usually a dedicated team working on each of them. Therefore, even for a small update to the app, the changes to all three components need to be integrated into one large deployable unit.

Moreover, if any of the components needs to be scaled, you can only do it horizontally, by deploying it on multiple servers. For example, let’s assume that only a small part of the application needs to be scaled, such as the third-party integration API. The only way to do it is to deploy multiple instances of the component it belongs to, in our case the presentation layer, on as many servers as needed to cope with the expected load. In other words, the smallest unit you can deploy or scale is one of the three assets mentioned above, as the sketch below illustrates.
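To make the constraint concrete, here is a schematic Python sketch of a monolithic presentation layer. The endpoint names and port are made up for illustration: the point is that the end-user pages and the third-party API live in one process, so scaling the API alone still means running more copies of the whole thing.

```python
# Schematic sketch of a monolithic presentation layer: web pages and the
# third-party integration API are bundled in one process/deployable.
# Scaling only the API still means replicating this entire process.
from http.server import BaseHTTPRequestHandler, HTTPServer


class PresentationLayer(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/api/"):       # third-party integration API
            self._reply("api response")
        elif self.path.startswith("/app/"):     # end-user web pages
            self._reply("<html>app page</html>")
        else:
            self._reply("home page")

    def _reply(self, body: str, status: int = 200) -> None:
        self.send_response(status)
        self.end_headers()
        self.wfile.write(body.encode())


if __name__ == "__main__":
    # One deployable unit: every instance carries all of the above.
    HTTPServer(("0.0.0.0", 8000), PresentationLayer).serve_forever()
```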

Where Microservices Come In

It may be hard to believe, but many companies like Amazon, Facebook and Google started with monolithic applications that were complex to understand and improve. As a result of their efforts to better serve their clients and achieve their business goals, a new approach to software design, called microservices, has emerged over the last few years.

According to Martin Fowler, “the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.”

Each microservice has its own specific job and is not concerned with the roles of other components. Decoupled services are easier to recompose to serve different purposes, and they also bring important performance advantages, since the critical ones can easily be separated from the rest of the application and scaled independently.
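As an illustration of this idea, the following is a minimal sketch, not a production design, of a single-purpose service exposing one business capability over an HTTP resource API. The “orders” resource, the port and the in-memory data are assumptions made for the example; a real service would typically use a framework and run in its own container.

```python
# Minimal sketch of a single-purpose microservice using only the standard
# library: one business capability (order lookup), its own process, its own data.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the service's own data store.
ORDERS = {"1": {"id": "1", "status": "shipped"}}


class OrdersHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expose a single resource: GET /orders/<id>
        if self.path.startswith("/orders/"):
            order = ORDERS.get(self.path.rsplit("/", 1)[-1])
            body = json.dumps(order or {"error": "not found"}).encode()
            self.send_response(200 if order else 404)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    # The service runs in its own process and can be deployed and scaled
    # independently of the rest of the application.
    HTTPServer(("0.0.0.0", 8080), OrdersHandler).serve_forever()
```

Because the service owns only this one capability, extra capacity for order lookups means running more copies of this small process rather than the whole presentation layer.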

Today, microservices and DevOps are gaining more and more traction. According to Forrester Research, by the end of 2017, 50% of organizations were implementing DevOps. Forrester even declared 2018 the Year of Enterprise DevOps, meaning that DevOps practices are expected to spread across the entire enterprise.

New Challenges

Before DevOps, there had been many attempts to optimize application delivery and IT operations, such as dev-test, ITIL, Six Sigma and so on. These earlier movements did not enjoy much success because they focused on optimizing only a limited number of functional areas of the application delivery pipeline (one, or at most two). DevOps is succeeding because it aims to optimize the entire delivery pipeline, across all functional areas.

So, if you are involved in software development, chances are you’re using DevOps to build a new product that relies on a microservice architecture. This is a sensible choice, and you are going to benefit from the important advantages this approach brings:

– significantly faster delivery, with the development-to-production cycle shrinking from months to days or even hours

– a better end-user experience, thanks to faster software release cycles and shorter time-to-fix for any bugs or issues in the application.

But it is likely that you will discover some new problems that you did not anticipate at design time. One of the most common concerns when introducing DevOps comes from the security space. Application delivery teams want to ship new releases as soon as possible, while security teams are meant to ensure that the resulting systems are secure. Even if, at first glance, the goals of the two teams seem to be at odds, a closer look shows that their business goals are the same: delivery of high-quality products and short release times. DevOps introduces continuous delivery of small batches, and it is imperative that the security team collaborates with the development team to secure the delivery pipelines.

The teams must also pay attention to the tools that support the continuous collaboration required by DevOps. For example, traditional identity and access management solutions, used for decades in the enterprise, might not be the best option for coping with the security challenges introduced by DevOps.

The high level of automation required by DevOps depends on a large number of secrets (passwords, API keys, certificates and so on). Moreover, because microservices are distributed across many containers, the potential attack surface is multiplied many times over. This is a serious security threat that the aforementioned traditional security solutions were not designed to cope with. Similar risks lie in the configuration files used by the many tools in the DevOps cycle, such as Ansible and Jenkins. To avoid this trap, make sure there are no hardcoded passwords, API keys or certificates in your source code before pushing it to GitHub. An even better approach is to use one of the dedicated secrets-management tools available today.
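As a minimal illustration of keeping credentials out of source code, the sketch below reads secrets from environment variables that are injected at runtime, for example by a secrets manager or the CI/CD tool. The variable names are hypothetical.

```python
# Minimal sketch: credentials are injected into the environment at runtime
# (e.g. by a secrets manager or the CI/CD tool) instead of being hardcoded.
import os


def get_secret(name: str) -> str:
    value = os.environ.get(name)
    if value is None:
        # Fail fast rather than falling back to a default baked into the code.
        raise RuntimeError(f"secret {name!r} is not set in the environment")
    return value


if __name__ == "__main__":
    db_password = get_secret("DB_PASSWORD")          # hypothetical variable names
    api_key = get_secret("THIRD_PARTY_API_KEY")
    # ...use the credentials to build connections/clients; never log or commit them.
```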

Another common security threat induced by the high level of automation required by DevOps is the insider attack. You may be familiar with the definition of an insider attack as it was generally understood for waterfall projects: an insider could be somebody on the team, determined to use his or her credentials and knowledge of the system to gain access to important data. With the rise of DevOps and its heavy use of automation, there is a new dimension to be aware of. Since automation takes the place of individual practitioners, there is a risk that the programmers of the automation tools could embed malware in these tools to gain access to the system’s sensitive data. To mitigate this risk, you should run multiple checks on the automation code before you start using it. A useful measure is to deploy it first on a separate, software-defined infrastructure where you can test its behaviour using specialized applications.
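One simple form such a check can take is verifying that the automation code still matches the digests recorded when it was reviewed, before it ever touches the isolated test infrastructure. The sketch below illustrates the idea; the file names and digests are placeholders, not part of any real toolchain.

```python
# Minimal sketch: verify automation scripts against SHA-256 digests recorded
# at review time before running them on the isolated test infrastructure.
import hashlib
import sys

# Placeholder entries: replace with the files and digests approved at review.
REVIEWED_DIGESTS = {
    "deploy.yml": "0f3e...replace-with-reviewed-digest",
    "provision.sh": "9ab1...replace-with-reviewed-digest",
}


def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify(expected_digests: dict) -> bool:
    ok = True
    for path, expected in expected_digests.items():
        actual = sha256_of(path)
        if actual != expected:
            print(f"REJECT {path}: digest {actual} does not match the reviewed version")
            ok = False
    return ok


if __name__ == "__main__":
    # Non-zero exit blocks the pipeline from using unreviewed automation code.
    sys.exit(0 if verify(REVIEWED_DIGESTS) else 1)
```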

Conclusion

As we have seen, DevOps was slow to start in the enterprise environment, but once the movement gained traction, more and more companies realized its benefits and pivoted to DevOps. However, there are inherent challenges you must be aware of, such as establishing continuous security validation in your pipeline. As mentioned in this article, there are easy-to-follow approaches that let your business stay ahead of the competition by improving application security, with no negative impact on deployment frequency.

 
