(Disclaimer: This manuscript is my personal view and is not affiliated with any group or organization.)
In Phase II, the DevOps team should start moving on-prem infrastructure and tooling, such as source code repositories, the CMDB, and CI, to the cloud. Don't get overwhelmed by the plethora of options for refining the build and release process in this phase. Moving the repositories and tooling is itself a big task, and planning to do so without interrupting development and other activities is quite a challenge. However, start thinking about future deployment methodologies.
Lay the groundwork for the team to use a tool such as Visual Studio Online (VSO) to collaborate and to manage projects, source code, team structure, builds, and load tests. VSO is an ALM tool; it can help consolidate project management, code management, test cases, etc., which might currently be spread across various tools. Visual Studio Online provides two types of source repositories in the cloud: Git and TFS (Team Foundation Server). So if you are already using one of these, set up a project and move the repositories to the cloud. Configure Eclipse and your build tools to use the new Git repo. Don't get overwhelmed and start consolidating disparate systems in this phase; just start the study, let the team get familiar with the tools, and let them experiment on their own.
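The repository move itself is mechanical. The sketch below demonstrates the mirror-clone approach using two local bare repositories as stand-ins for the on-prem server and the new cloud remote; in practice the `set-url` step would point at your VSO project's Git URL instead.

```shell
# Demonstrate migrating a Git repo to a new remote with a full mirror.
# Two local bare repos stand in for "on-prem" and "cloud"; in a real
# migration these would be HTTPS URLs (e.g. your VSO project's Git URL).
set -e
cd "$(mktemp -d)"

# Stand-ins for the existing on-prem repository and the new cloud repo.
git init -q --bare onprem.git
git init -q --bare cloud.git

# Seed the on-prem repo with one commit so there is history to move.
git clone -q onprem.git seed && cd seed
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "initial commit"
git push -q origin HEAD:master
cd ..

# The actual migration: mirror-clone, repoint origin, push everything
# (all branches, tags, and refs) in one shot.
git clone -q --mirror onprem.git migrate && cd migrate
git remote set-url origin ../cloud.git   # real move: set-url to the VSO Git URL
git push -q --mirror
cd ..

# The history now lives in the "cloud" repository.
git --git-dir=cloud.git log --oneline master
```

Because `--mirror` copies every ref, developers can keep committing to the old remote until the cutover and you can re-run the push to sync; point Eclipse and the build tools at the new URL only once the final mirror push is done.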
Prepare to embrace PaaS deployment infrastructure services. Improve the deployment process and start writing Infrastructure as Code. Use the cloud provider's supported scripting platforms and SDKs, e.g. PowerShell, Node.js, or one of the SDKs in Azure (or boto or one of the SDKs in AWS). Try to create a template for the initial setup, either by scripting or by using a cloud provider framework such as Resource Manager, a JSON-based format, in Azure (or CloudFormation in AWS).
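To make the idea concrete, here is a minimal sketch of such a template on the CloudFormation side. The resource name, instance type, and AMI ID are placeholders, and the template describes just a single build-agent VM; the point is that the setup lives in a reviewable file rather than in someone's head.

```shell
# Write a minimal CloudFormation template describing one build-agent VM.
# Resource name, instance type, and AMI ID are illustrative placeholders.
set -e
cd "$(mktemp -d)"
cat > build-agent.json <<'EOF'
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal build agent - illustrative only",
  "Resources": {
    "BuildAgent": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "InstanceType": "t2.medium",
        "ImageId": "ami-00000000"
      }
    }
  }
}
EOF

# Sanity-check that the template is well-formed JSON before deploying.
python3 -m json.tool build-agent.json > /dev/null && echo "template OK"

# Deployment itself would be (requires AWS credentials, so not run here):
#   aws cloudformation create-stack --stack-name build-agent \
#       --template-body file://build-agent.json
```

The equivalent in Azure would be a Resource Manager template with the same shape: a JSON document listing resources, checked into the same repository as the code it supports.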
Move the CMDB to the cloud; set up your own instances of Chef or Puppet. Move the CI tools to the cloud; set up your own instances of Jenkins or Hudson. If your build has specialized needs, use a custom script for VM customization, via a VM Extension in Azure (or user data in AWS EC2). As an illustration, if your build pushes a tar file to cloud provider storage, Blob in Azure (or S3 in AWS), you could use a custom script that pulls the build during VM startup and brings the machine to the desired state configuration. Check for alternatives too; for instance, instead of the custom script you could have Jenkins deliver the tar file to the VM.
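A sketch of that startup step follows. The storage fetch is shown as a comment because it needs cloud credentials; a locally created tarball stands in for the artifact the build would have pushed, and the bucket name and paths are hypothetical.

```shell
# Sketch of a VM-startup deployment step: fetch the build tarball from
# storage and unpack it into the application directory. A local tarball
# stands in for the artifact; bucket and paths are placeholders.
set -e
cd "$(mktemp -d)"

# Stand-in for the artifact the build pushed to Blob/S3.
mkdir build-src && echo "app payload" > build-src/app.txt
tar -czf build.tar.gz -C build-src app.txt

# On a real VM (run via a VM Extension or EC2 user data) the fetch
# would be something like:
#   aws s3 cp s3://my-build-bucket/releases/myapp-latest.tar.gz build.tar.gz

# Unpack into the application directory to reach the desired state.
APP_DIR="$PWD/opt-myapp"    # on a real VM this would be e.g. /opt/myapp
mkdir -p "$APP_DIR"
tar -xzf build.tar.gz -C "$APP_DIR"
echo "deployed: $(ls "$APP_DIR")"
```

The same script body works in both clouds; only the delivery mechanism (VM Extension vs. user data) and the storage fetch command differ.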
Add a new set of monitoring for the CMDB, CI, and source control systems in the cloud. This will let you start collecting metrics. In later phases, you can analyze the logs using PaaS services such as HDInsight for capacity planning. This step can significantly reduce unplanned outages caused by a lack of capacity planning. If you are already using capacity planning tools such as Ganglia or Cacti, move them to the cloud and have them monitor these systems too.
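Even before a full monitoring stack is in place, metric collection can be as simple as the sketch below: a timestamped disk-usage sample for the volume holding the CI/SCM data, appended to a log that later phases can analyze. The mount point is a placeholder.

```shell
# Minimal capacity metric: one timestamped disk-usage sample for the
# volume holding CI/SCM data. Appending a line like this to a log on a
# schedule gives later phases raw data for capacity planning.
# The mount point below is a placeholder.
DATA_MOUNT="/"
USED_PCT=$(df -P "$DATA_MOUNT" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) mount=$DATA_MOUNT disk_used_pct=$USED_PCT"
```

Run from cron, this produces a plain log of samples; Ganglia or Cacti would collect the same signal with history and graphing built in.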
Don't deviate from the processes and tools currently in use during this phase. For instance, don't try to institutionalize a containerization technology such as Docker, Rocket, or Kubernetes if you are not using one already.
The diagram below is a visual representation of the various Azure technologies for DevOps (CloudFormation, Elastic Beanstalk, boto, etc. being the AWS counterparts):