
What DevOps Engineers Actually Do: Day 2 of 90 Days of DevOps


If you followed along with Day 1, you’ve got the big picture: DevOps is a culture shift that breaks down walls between development and operations. But knowing what DevOps is doesn’t tell you much about what DevOps engineers spend their time doing.

That’s what today is about. We’re going to zoom in on the actual responsibilities — the daily work, the skills, and the mental models that separate someone who “knows about DevOps” from someone who practices it.

Two sides of the same coin

Every application has two sides. There’s the development side — writing features, fixing bugs, running tests, iterating on the codebase. And there’s the operations side — deploying that code to servers, keeping those servers healthy, monitoring performance, and making sure real users can actually reach the thing.

For a long time, these two sides operated in isolation. Developers would write code and throw it over a wall. Operations would catch it, figure out how to run it, and deal with the fallout when something broke at 2 AM. Nobody was happy.

A DevOps engineer sits right in the gap between those two worlds. You’re not writing the application from scratch (usually), and you’re not racking servers in a data center (usually). Instead, you’re building the bridge — the automated systems, pipelines, and infrastructure that move code from a developer’s laptop to a production environment reliably and repeatedly.

It starts with the application

Everything in DevOps revolves around the application. That might sound obvious, but it’s easy to lose sight of when you’re deep in Kubernetes manifests or Terraform modules.

Here’s the thing: you don’t need to be the one writing the application code. But you absolutely need to understand how it works. That means knowing:

- What language and framework it’s built with, and how it’s built and started
- What services it depends on (databases, caches, message queues, third-party APIs)
- How it’s configured, and what it expects from its environment
- How it’s tested, and what a healthy instance looks like

Without that understanding, you’re just pushing buttons. You can’t design a good CI/CD pipeline for an application you don’t understand. You can’t troubleshoot a deployment failure if you don’t know what the app expects from its environment.

Where does the application run?

Once code leaves a developer’s machine, it needs a home. That home could be:

- Virtual machines or bare-metal servers
- Containers orchestrated by a platform like Kubernetes
- Serverless functions
- A managed platform (PaaS), or a CDN for static assets

Most real-world setups are a mix. Your main API might run in Kubernetes while your frontend sits on a CDN and your background jobs run as serverless functions. A DevOps engineer needs to understand all of these models, even if they specialize in one.

The point is: someone has to set up and maintain whatever environment the application runs in. That someone is often you.

The responsibilities, broken down

Let’s get specific. Here’s what DevOps engineers typically own or contribute to:

CI/CD pipelines

This is the backbone. You design and maintain the automated workflows that take code from a git commit through build, test, and deployment stages. Tools like GitHub Actions, GitLab CI, Jenkins, CircleCI, or ArgoCD handle the execution, but the pipeline logic — what gets tested, in what order, with what gates — that’s your design.

A well-built pipeline catches bugs before they reach production, enforces code quality standards, runs security scans, and deploys with zero (or near-zero) downtime. A poorly built one becomes a bottleneck that everyone hates.
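The gating idea can be sketched in a few lines. This is a toy model, not any real CI system’s API: stages run in a fixed order, and the first failing gate stops everything downstream, so a broken build or a failed scan never reaches the deploy step. The stage names and gate functions here are invented for illustration.

```python
from typing import Callable

def run_pipeline(stages: list[tuple[str, Callable[[], bool]]]) -> list[str]:
    """Run stages in order; stop at the first failing gate."""
    completed = []
    for name, gate in stages:
        if not gate():
            print(f"Pipeline stopped at stage: {name}")
            break
        completed.append(name)
    return completed

stages = [
    ("build", lambda: True),           # compile / package the app
    ("unit-tests", lambda: True),      # fast feedback first
    ("security-scan", lambda: False),  # simulate a failing gate
    ("deploy", lambda: True),          # never reached if a gate fails
]
print(run_pipeline(stages))
```

Real pipelines add parallelism, retries, and manual approval gates, but the core invariant is the same: nothing deploys unless every earlier gate passed.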

Infrastructure provisioning

You define the servers, networks, databases, load balancers, DNS records, and storage buckets that the application needs — and you do it in code. Terraform, Pulumi, AWS CDK, Crossplane — pick your tool, but the principle is the same: infrastructure should be version-controlled, reviewable, and reproducible.

When a new environment is needed (a staging copy for a feature branch, a disaster recovery region, a performance testing cluster), you shouldn’t be clicking through a cloud console. You should be running a command that brings it all up from a definition file.
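The core mechanic behind tools like Terraform is a plan step: compare the desired state (your definition files) against the current state (what actually exists) and compute the changes. Here is a deliberately tiny sketch of that diffing idea, with made-up resource names, not any real tool’s format:

```python
def plan(desired: dict, current: dict) -> dict:
    """Compute create/update/delete actions from desired vs. current state."""
    to_create = {k: v for k, v in desired.items() if k not in current}
    to_delete = {k: v for k, v in current.items() if k not in desired}
    to_update = {k: desired[k] for k in desired
                 if k in current and current[k] != desired[k]}
    return {"create": to_create, "update": to_update, "delete": to_delete}

desired = {"web-server": {"size": "large"}, "db": {"engine": "postgres"}}
current = {"web-server": {"size": "small"}, "old-cache": {"engine": "redis"}}
print(plan(desired, current))
```

Because the definition is data, it can be version-controlled and code-reviewed like anything else, which is exactly the point of infrastructure as code.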

Monitoring and incident response

Deploying code is only half the job. You also need to know when something breaks — ideally before users notice. That means setting up:

- Metrics collection and dashboards
- Centralized log aggregation
- Alerting rules that notify the right people
- Health checks and uptime monitoring

When an incident happens, you’re often the one leading the response — diagnosing the problem, coordinating fixes, and writing the post-mortem afterward.
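At its simplest, alerting is just evaluating metric samples against thresholds. This sketch (rule names and thresholds are invented) shows the shape of it: a warning fires when the error rate crosses one line, a critical page when it crosses another.

```python
def evaluate_alerts(samples: list[float], rules: dict[str, float]) -> list[str]:
    """Fire an alert for every rule whose threshold the latest sample exceeds."""
    latest = samples[-1]
    return [name for name, threshold in rules.items() if latest > threshold]

error_rate_samples = [0.01, 0.02, 0.07]  # fraction of failed requests
rules = {"error-rate-warning": 0.05, "error-rate-critical": 0.10}
print(evaluate_alerts(error_rate_samples, rules))
```

Production systems (Prometheus and friends) add durations, label matching, and routing, but the mental model — samples in, rules evaluated, alerts out — carries over directly.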

Security integration

Security can’t be an afterthought bolted on at the end. DevOps engineers integrate security scanning into CI/CD pipelines (SAST, DAST, dependency vulnerability checks), manage secrets and credentials (HashiCorp Vault, AWS Secrets Manager), enforce network policies, and ensure containers run with minimal privileges.

The industry term “DevSecOps” exists precisely because security is now woven into the DevOps workflow, not treated as a separate gate.
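A dependency vulnerability check, one of the gates mentioned above, boils down to comparing your pinned versions against an advisory database. The advisory data and package versions below are entirely made up for illustration; real scanners pull from curated feeds:

```python
def vulnerable_deps(pinned: dict[str, str],
                    advisories: dict[str, set[str]]) -> list[str]:
    """Return every pinned dependency whose version appears in an advisory."""
    return [f"{pkg}=={ver}" for pkg, ver in pinned.items()
            if ver in advisories.get(pkg, set())]

pinned = {"requests": "2.19.0", "flask": "2.3.2"}
advisories = {"requests": {"2.19.0", "2.19.1"}}  # hypothetical advisory DB
findings = vulnerable_deps(pinned, advisories)
if findings:
    print("Build blocked:", findings)
```

Wired into a pipeline, a non-empty findings list fails the build, which is what turns security from a late-stage audit into an automatic gate.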

Configuration management

Applications behave differently in development, staging, and production. Managing those differences — environment variables, feature flags, database connection strings, API keys — is a real job. Tools like Ansible, Chef, and Puppet automate server configuration, while platforms like LaunchDarkly handle feature flags.

Getting this wrong means “it works on my machine” stays the default excuse.
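One common pattern for managing those differences is layered configuration: global defaults, overridden by per-environment values, overridden in turn by environment variables. A minimal sketch, with invented setting names and an assumed `APP_` prefix for the variables:

```python
import os

DEFAULTS = {"log_level": "info", "db_pool_size": "5"}
PER_ENV = {
    "production": {"log_level": "warning", "db_pool_size": "20"},
    "staging": {"db_pool_size": "10"},
}

def load_config(env: str) -> dict:
    """Merge defaults, per-environment overrides, then env vars (highest wins)."""
    config = {**DEFAULTS, **PER_ENV.get(env, {})}
    for key in config:
        override = os.environ.get(f"APP_{key.upper()}")
        if override is not None:
            config[key] = override
    return config

print(load_config("production"))
```

The layering is what keeps environments from drifting: everything shares the same defaults, and each difference is explicit and reviewable rather than hand-edited on a server.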

You don’t need to be an expert in everything

This list can feel overwhelming. Networking, Linux, cloud platforms, containers, CI/CD, monitoring, security — that’s a lot of ground to cover. But here’s what I’ve learned from working in this space:

You need breadth, not depth in everything. A DevOps engineer is a generalist with deep knowledge in a few areas. You should understand networking well enough to debug DNS issues and configure load balancers, but you don’t need to design BGP routing policies. You should read application code comfortably, but nobody expects you to architect the application from scratch.

Think of it as a T-shaped skill set: broad knowledge across the stack, deep expertise in the areas your team needs most.

Most DevOps engineers come from one of two backgrounds:

- Developers who grew curious about how their code gets deployed and run
- Sysadmins and operations engineers who learned to code and automate their work

Both paths work. The key is staying curious about the other side.

The real job: shipping code safely

If you strip away all the tool names and buzzwords, the core responsibility of a DevOps engineer comes down to one question:

How do we get new features and bug fixes from a developer’s branch into production — continuously, automatically, and without breaking things?

That’s it. Every tool you learn, every pipeline you build, every monitoring dashboard you create serves that goal. The specifics change (new tools appear, old ones fall out of favor), but the question stays the same.

And it’s not a question you answer once. Your pipelines evolve as the application grows. New services get added, traffic patterns change, compliance requirements appear, teams scale up. The infrastructure you built six months ago might need a rethink. That’s normal. That’s the job.

What’s coming next

Tomorrow in Day 3, we’ll start mapping specific tools and technologies to each stage of the DevOps lifecycle. We’ll look at what goes where — from source control to deployment to monitoring — and start building a practical toolkit you can reference throughout the series.

For now, if you take one thing from today: DevOps engineering is about building the systems that let other people ship code with confidence. You’re not the one writing every feature, but you’re the reason those features reach users reliably.


Resources

These videos give you a practical sense of what DevOps engineers do day-to-day, plus structured learning paths if you want to go deeper.