How to Become a DevOps Engineer

Zhang Alex
8 min read · Feb 19, 2021

In this article we will discuss what a DevOps engineer is and what the tasks and responsibilities of a DevOps engineer are. First, you need to understand that there are two main parts to creating an application: the development part, where software developers program and test the application, and the operations part, where the application is deployed and maintained on a server. DevOps is the link between the two. So, let's dive into the details to really understand the DevOps tasks and which tools are needed to carry them out.

DevOps Engineer

Hashtag: #devopsengineer #engineer #Developer #devopsengineerresume #Linux #Git #Kubernetes

It all starts with the application. The development team will program an application using some technology stack: different programming languages, build tools, and so on. They will of course use a code repository to work on the code as a team; one of the most popular today is Git. As a DevOps engineer you will not be programming the application, but you need to understand how developers work: which Git workflow they are using, how the application is configured to talk to other services or databases, and the concepts of automated testing.

That application then needs to be deployed on a server so that users can eventually access it; that is, after all, why we are developing it. So we need some kind of infrastructure, on-premise servers or cloud servers, and these servers need to be created and configured to run our application. As a DevOps engineer you may be responsible for preparing that infrastructure. Since most servers that run applications are Linux servers, you need knowledge of Linux. You need to be comfortable with the command-line interface, because you will do most of your work on a server through it: knowing basic Linux commands, installing tools and software, understanding the Linux file system and the basics of administering a server, knowing how to SSH into a server, and so on.

You also need to know the basics of networking and security: for example, how to configure firewalls to secure the application, but also how to open some ports to make the application accessible from the outside, and how IP addresses, ports, and DNS work.
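As a minimal sketch of the networking basics mentioned above, the snippet below uses only Python's standard library to show how a hostname resolves to an IP address and how well-known port numbers identify services on a host. The `resolve` helper is a name introduced here for illustration, not a standard API.

```python
# Minimal sketch: hostname -> IP resolution, and well-known service ports.
import socket

def resolve(hostname: str) -> str:
    """Return one IPv4 address for a hostname (illustrative helper)."""
    return socket.gethostbyname(hostname)

# "localhost" conventionally resolves to the loopback address.
ip = resolve("localhost")
print(ip)                              # typically 127.0.0.1

# A port identifies which service on the host you are talking to.
print(socket.getservbyname("https"))   # 443
```

When you open a firewall port "to make the application accessible from the outside", it is exactly this port number that you are exposing.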

However, to draw a line between IT operations and DevOps: you don't need advanced operating system, networking, or security skills, and you don't need to be able to administer servers from start to finish. There are professions, such as network and system administrators and security engineers, that specialize in each of these areas. Your job is to understand the concepts well enough to prepare a server to run your application, not to completely take over managing the servers and the whole infrastructure.

Nowadays, as containers have become the new standard, you will probably be running your application as containers on a server. This means you need to understand the concepts of virtualization and containers and be able to manage containerized applications. One of the most popular container technologies today is Docker, so you definitely need to learn it.

So now we have developers creating new features and bug fixes on one side, and infrastructure, servers managed and configured to run the application, on the other. The question is how to get those features and bug fixes from the development team onto the servers to make them available to end users. In other words, how do we release new application versions? That's where the main tasks and responsibilities of DevOps come in. With DevOps, the question is not just how to do this in any possible way, but how to do it continuously, efficiently, quickly, and in an automated way. First of all, when a feature or bug fix is done, we need to run the tests and package the application as an artifact, such as a JAR file or a ZIP archive, so that we can deploy it. That's where build tools and package managers come in: for example, Maven and Gradle for Java applications, or npm for JavaScript applications. So you need to understand how this process of testing and packaging applications works.
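The "test, then package an artifact" step described above is what Maven, Gradle, or npm automate for you. The sketch below is not tied to any real build tool; `run_tests` is a placeholder and the file names are invented, but it shows the basic gating logic: tests must pass before an artifact is produced.

```python
# Illustrative sketch of a build step: run tests, then package an artifact.
import pathlib
import tempfile
import zipfile

def run_tests() -> bool:
    # Placeholder: a real pipeline would invoke pytest, mvn test, npm test, ...
    return True

def package(src_dir: pathlib.Path, artifact: pathlib.Path) -> pathlib.Path:
    """Bundle source files into a deployable zip artifact."""
    with zipfile.ZipFile(artifact, "w") as zf:
        for f in src_dir.rglob("*"):
            zf.write(f, f.relative_to(src_dir))
    return artifact

src = pathlib.Path(tempfile.mkdtemp())
(src / "app.py").write_text("print('hello')\n")
artifact_dir = pathlib.Path(tempfile.mkdtemp())

out = None
if run_tests():                      # packaging is gated on passing tests
    out = package(src, artifact_dir / "app-1.0.zip")
    print(out.name)                  # app-1.0.zip
```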
As I mentioned, containers are being adopted by more and more companies as the new standard, so you will probably be building Docker images of your application. As a next step, that image must be stored somewhere, in an image repository. A Docker artifact repository such as Nexus or Docker Hub will be used here, so you need to understand how to create and manage artifact repositories as well.
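To make the idea of an image repository concrete: every image pushed to a registry is addressed by a reference made up of a registry host, a repository name, and a tag. The naive parser below (a hypothetical helper, not part of any Docker library) just illustrates that structure.

```python
# Hypothetical helper: split an image reference into registry, repo, tag.
def parse_image_ref(ref: str):
    """Naive sketch: 'registry/repository:tag' -> parts."""
    registry, _, rest = ref.partition("/")
    if ":" in rest:
        repository, _, tag = rest.rpartition(":")
    else:
        repository, tag = rest, "latest"   # Docker's default tag is 'latest'
    return registry, repository, tag

print(parse_image_ref("registry.example.com/team/app:1.2.3"))
# ('registry.example.com', 'team/app', '1.2.3')
```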

Of course, you don't want to do any of this manually; instead, you want one pipeline that performs all of these steps in sequence. So you need build automation, and one of the most popular build automation tools is Jenkins. You need to connect this pipeline to the Git repository to get the code. This is part of the continuous integration process, where code changes from the repository get continuously tested. After the new feature or bug fix is tested, built, and packaged, you want to deploy it to a server, which is part of the continuous deployment process, where code changes get deployed continuously to a deployment server. There can be additional steps in the pipeline, such as sending a notification to the team about the pipeline state or handling failed deployments, but this flow represents the core of the CI/CD pipeline, and the CI/CD pipeline is at the heart of DevOps tasks and responsibilities. As a DevOps engineer you should be able to configure a complete CI/CD pipeline for your application. That pipeline should be continuous, which is why the unofficial logo of DevOps is an infinity loop: application improvement never ends, and new features and bug fixes that need to be deployed get added all the time.

Now let's go back to the infrastructure where our application is running. Many companies today use virtual infrastructure in the cloud instead of creating and managing their own physical infrastructure. These are infrastructure-as-a-service platforms such as AWS, Google Cloud, and Azure. One obvious reason is to save the cost of setting up your own infrastructure, but these platforms also manage a lot for you, making your infrastructure much easier to run. For example, through a UI you can create your network and configure firewalls, route tables, and all the other parts of your infrastructure, using the services and features these platforms provide.
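The CI/CD flow described above, checkout, test, build, deploy, with a notification on success or failure, can be sketched as a sequence of stages. The stage and `notify` names below are illustrative, not a real Jenkins API.

```python
# Toy sketch of CI/CD stages; all names are illustrative placeholders.
def checkout() -> str:
    return "code"                      # pull the latest code from Git

def run_tests(src: str) -> bool:
    return True                        # placeholder for the test suite

def build(src: str) -> str:
    return f"{src}-artifact"           # package the tested code

def deploy(artifact: str) -> str:
    return f"deployed {artifact}"      # push to the deployment server

def notify(msg: str) -> None:
    print(msg)                         # e.g. message the team in chat

def pipeline():
    src = checkout()
    if not run_tests(src):
        notify("pipeline failed: tests")   # handle a failed stage
        return None
    result = deploy(build(src))
    notify(f"pipeline ok: {result}")
    return result

pipeline()
```

Each stage only runs if the previous one succeeded, which is exactly what a real CI server enforces for you.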
However, many of these features and services are platform-specific, so you need to learn them to manage infrastructure there: if your applications will run on AWS, you need to learn AWS and its services. AWS is pretty complex, but again, you don't have to learn every service it offers; you just need to know the concepts and services required to deploy and run your specific application on AWS infrastructure.

Our application will run as a container, because we're building Docker images, and containers need to be managed. For smaller applications, Docker Compose or Docker Swarm is enough to manage them, but if you have many more containers, as in the case of a big microservices system, you need a more powerful container orchestration tool, the most popular of which is Kubernetes. So you need to understand how Kubernetes works and be able to administer and manage a cluster as well as deploy applications in it.

Now, when you have maybe thousands of containers running in Kubernetes on hundreds of servers, how do you track the performance of your individual applications and whether everything is running successfully? How do you know whether your infrastructure has a problem and, more importantly, whether your users are experiencing problems in real time? One of your responsibilities as a DevOps engineer may be to set up monitoring for your running application, the underlying Kubernetes cluster, and the servers the cluster runs on, so you need to know a monitoring tool such as Prometheus or Nagios.

Let's say this is our production environment. In your project you will of course need development and testing or staging environments as well, to properly test your application before deploying it to production. So you need that same deployment environment multiple times.
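Conceptually, the monitoring described above boils down to probing many targets and reporting the ones that fail. Real tools like Prometheus or Nagios do this continuously and at scale with scraping and alerting; the sketch below, with invented target names and a fake probe, only shows the core idea.

```python
# Conceptual sketch of a monitoring check: probe targets, report failures.
from typing import Callable, Dict, List

def check(targets: Dict[str, bool], probe: Callable[[str], bool]) -> List[str]:
    """Return the subset of targets whose probe failed."""
    return [t for t in targets if not probe(t)]

# Fake probe results for three illustrative services.
statuses = {"api": True, "db": True, "cache": False}
failed = check(statuses, lambda t: statuses[t])
print(failed)   # ['cache']
```

A real setup would replace the fake probe with HTTP health endpoints or exporter metrics, and route the failures to an alerting channel instead of printing them.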
Creating and maintaining the infrastructure for one environment already takes a lot of time and is very error-prone, so we don't want to do it manually three times. As I said before, we want to automate as much as possible, so we automate this process: creating the infrastructure, configuring it to run our application, and then deploying the application on that configured infrastructure. This can be done using a combination of two types of infrastructure-as-code tools: infrastructure provisioning tools, such as Terraform, and configuration management tools, such as Ansible, Chef, and Puppet. As a DevOps engineer you should know one of these tools to make your own work more efficient and your environments more transparent, so you know exactly what state they are in, and they become easy to replicate and easy to recover.

In addition, since you work closely with developers and system administrators and also automate some tasks for them, you will most probably need to write scripts, or maybe small applications, to automate tasks such as backups, system monitoring, cron jobs, network management, and so on.
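The core idea behind provisioning tools like Terraform is declarative: you describe the desired state in code, the tool compares it with the actual state, and computes a plan of changes. The toy `plan` function below mimics that diffing step with invented resource names; it is a conceptual sketch, not how Terraform is implemented.

```python
# Conceptual sketch of declarative IaC: desired state vs. actual state.
def plan(desired: dict, actual: dict) -> dict:
    """Compute which resources to create, update, or delete."""
    return {
        "create": {k: v for k, v in desired.items() if k not in actual},
        "update": {k: v for k, v in desired.items()
                   if k in actual and actual[k] != v},
        "delete": {k: v for k, v in actual.items() if k not in desired},
    }

desired = {"web": {"size": "t3.small"}, "db": {"size": "t3.medium"}}
actual  = {"web": {"size": "t3.micro"}}   # what currently exists

changes = plan(desired, actual)
print(changes)
```

Because the desired state lives in version-controlled code, replicating a second or third environment is just applying the same plan again.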

To do that, you need to know a scripting language. This could be an operating-system-specific scripting language like Bash or PowerShell, or, what's even more in demand, a more powerful and flexible language like Python, Ruby, or Go, which are also operating-system independent. Again, you just need to learn one of these languages, and Python is without a doubt the most popular and in-demand one in today's DevOps space. Easy to learn, easy to read, and very flexible, Python has libraries for most database and operating-system tasks as well as for the different cloud platforms.

With these automation tools and languages, you write all of this automation logic as code, for creating, managing, and configuring infrastructure; that's why it's called infrastructure as code. You then manage that code just like application code, using version control such as Git, so as a DevOps engineer you also need to learn Git.

At this point you may be wondering how many of these tools you need to learn. Do you need to learn multiple tools in each category, and which ones should you learn, since there are so many? You should learn one tool in each category, the one that is most popular and most widely used, because once you understand the concepts well, building on that knowledge and picking up an alternative tool will be much easier.
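As an example of the kind of small automation script mentioned above, here is a timestamped file backup using only Python's standard library. The paths and file contents are invented for the demonstration.

```python
# Small automation script: back up a file under a timestamped name.
import pathlib
import shutil
import tempfile
from datetime import datetime

def backup(src: pathlib.Path, dest_dir: pathlib.Path) -> pathlib.Path:
    """Copy src into dest_dir as '<name>.<timestamp>.bak'."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = dest_dir / f"{src.name}.{stamp}.bak"
    shutil.copy2(src, target)          # copy2 preserves file metadata
    return target

# Demo with a throwaway config file.
tmp = pathlib.Path(tempfile.mkdtemp())
cfg = tmp / "app.conf"
cfg.write_text("port=8080\n")
bak = backup(cfg, tmp / "backups")
print(bak.read_text())                 # port=8080
```

In practice a script like this would be triggered by a cron job, which is exactly the class of tasks a DevOps engineer automates for the team.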


Zhang Alex

Article writer. Works as an IT Support Executive at AlT Network and Technology Ltd.