
Raghu Vamsi Chimata

Reston,VA

Summary

7+ years of IT experience in DevOps, configuration management, and release/build management across OpenStack, AWS, and Azure clouds. Experience with version control tools such as Subversion (SVN), Git, and Bitbucket, and with build tools such as Maven and Ant. Excellent hands-on experience implementing OpenStack environments. Worked on AWS EC2, RDS, S3, and related CloudWatch metrics and monitoring. Used GitHub Actions and Harness for CI/CD in AKS and on-prem environments. Experience with SQL Server tools such as the Import/Export Wizard and SQL Query Analyzer. Extensive experience with Docker container infrastructure and continuous integration for building and deploying Docker containers, including container management services such as Docker and Docker Hub. Strong understanding of infrastructure automation tooling (Terraform, CloudFormation templates). Experienced in working with Ansible and in developing system automation tools in Python. Expertise in querying RDBMSs such as Oracle and SQL Server using SQL. Experience using and maintaining tracking systems such as Jira, Kanban, Azure DevOps boards, and Confluence; configured Jira boards for releases, Scrum, sprints, backlogs, and support teams within the organization.

Overview

10 years of professional experience

Work History

DevOps Engineer

GEICO
04.2023 - 10.2023
  • The project was based entirely on .NET web and mobile applications for the Mortgage entity
  • Worked on creating build and Release pipelines in Azure DevOps and configured variable groups for different stages of the pipelines
  • Worked on converting the manual pipelines into YAML pipelines in Azure DevOps
  • Provisioned multiple resources for the application, such as Azure Function Apps, deployment slots, and Service Bus instances
  • Created multistage pipelines with unique autonomous abilities for the Devs and DevOps based on requirements
  • Responsible for the releases and took care of the release documents and stories and CAB approvals for each release
  • Created blue-green and canary deployments using deployment slots in the Azure services
  • Used Terraform to create, destroy, and update resources
  • Worked with CNSS and configured certificates, renewed them, and updated the DNS configurations for the Azure web applications
  • Documented all the processes, issues and templates used in this project
  • Supported the Prod releases as well as production issues and helped triage the root causes for the issues with the help of Application insights and logs
  • Set up and maintained logging, infrastructure, and service monitoring subsystems using tools like Elasticsearch, Kibana, Prometheus, Grafana, and Alertmanager
  • Supported weekend, overnight, and other off-business-hour releases scheduled for every sprint
  • Supported the application with an on-call rotation and 24/7 Prod support after the application went live in Prod
  • Created alerts and monitoring dashboards using Prometheus and Grafana for all the applications deployed
  • Helped the team with data and pipeline migration when the applications were moved
  • Provisioned AKS clusters, maintained pods, nodes
  • Used kubectl commands for Kubernetes operations and debugging
  • Configured GitHub repositories for source code management, maintained all repositories, and updated config settings for repos and branches as required by the team
  • Provisioned and supported Cosmos Database, collections, and instances for the CosmosDB
  • Used Jira for creating and tracking issues; created Confluence spaces and documented all the project material
  • Environment: Linux, Ansible, Azure DevOps, GitHub, Terraform, Kubernetes, Jira, Confluence, Elasticsearch, Grafana, Prometheus, Alertmanager, .NET 6, Shell, Cosmos DB, Docker, AKS.

DevOps Engineer

Investors Bank
09.2021 - 04.2023
  • Worked on creating and maintaining the resources in AWS for the applications hosted on the cloud
  • Worked on the route tables, VPC configurations and other network rules for multiple EC2 instances and applications
  • Provisioned VPCs, EC2 instances and Security Groups using Terraform scripts
  • Initially installed Jenkins and tested AWS CodeCommit and other features
  • Served as the AWS administrator responsible for maintaining applications and resources across the 8 AWS accounts at Investors Bank
  • Worked on the Security violation remediations based on the report from security scanning
  • Applied necessary patches when needed and tweaked the configurations wherever needed
  • Worked with the Data Engineer team and helped them with data migration between AWS accounts
  • Created S3 buckets, attached the required bucket policies, and encrypted objects with custom keys, explicitly granting the necessary roles the required read, copy, and list permissions along with key policies to encrypt and decrypt
  • Onboarded tools like Jenkins, Terraform, Jira, and GitHub for this project and took care of installations and configurations
  • Integrated GitHub with Jenkins and Jira boards using plugins and webhooks
  • Created workflows in GitHub and Jira and updated the necessary fields in the Jira story settings
  • Worked on Encrypting the unencrypted existing EBS volumes by following all the necessary steps
  • Worked on installing the necessary external software as per the application requirements
  • Created the AWS Application Load balancers and Autoscaling groups for the applications
  • Maintained and migrated the SQL server Database backups to the newly created AWS accounts and responsible for cleaning up old and unused resources for effective cost management
  • Created option groups and backup policies for the SQL Server and MySQL databases
  • Helped the ETL processes by providing necessary support with databases, S3 buckets, roles, and permissions throughout the process
  • Used Terraform to create, destroy, and update resources
  • Worked on Cloud Formation stacks for resource management
  • Experienced in resolving internal AWS issues by creating support tickets with AWS and working with them until the issues were resolved
  • Worked on IAM related stories for user and role creations, updates to the policies and attaching them to proper roles
  • Configured Confluence, created the first spaces for the organization, and helped multiple teams onboard onto Confluence
  • Created many pages and maintained folders for my team with structured documentation in confluence
  • Implemented MFA on all the AWS accounts and tested them successfully as per security standards
  • Environment: Linux, Ansible, AWS, Jenkins, GitHub, Terraform, Jira, EC2, S3, VPCs, ETL, MySQL, CloudFormation, IAM, Confluence.

Azure DevOps Engineer

Kroger Technology
03.2021 - 09.2021
  • The main goal of the project was to create, maintain, and enhance the customer notification services; the team's name was Customer Dialogue
  • Worked in a team of 12 which consisted of Product Owner, Project manager, Developers, Team Lead and a Scrum master with me as the only DevOps Engineer
  • A primary goal was to migrate everything from on-prem PCF to Azure Cloud using different tools
  • Used GitLab to host the code; migrated all the code, repos, and branches, along with commit history and pipelines, to GitHub SaaS
  • Used the Clobber tool to migrate from GitLab to GitHub
  • Created multiple workflow actions for each project, separately for feature branch, release, and Dev activities, thus supporting the developers' branching strategy
  • Led the migration from PCF to Azure, choosing a two-step path: migrated the applications from PCF to AKS and to on-prem Kubernetes
  • Used Harness as the CI and CD tool to create the pipelines and integrated it with GitHub to create automated pipelines
  • Created GitHub actions using GitHub workflows to trigger the pipelines in Harness
  • Used Ansible to create workflows in GitHub that perform build and deploy actions
  • Packaged the code into Docker images, pushed the images to Artifactory, and later pulled them from there to deploy to clusters in the AKS environment
  • Used Helm charts to package and maintain the applications on AKS, including autoscaling, service monitoring, and deployment
  • Used Gradle for build and SonarQube for code coverage analysis
  • Provisioned new clusters using Terraform and orchestrated them in Rancher
  • Initially used Spinnaker for CI/CD and then moved to Harness
  • Used Ansible for automation in the Cloud
  • Utilized Kubernetes as the runtime environment of the CI/CD system to build, test, and deploy
  • Participated in all production deployments on rotation
  • Documented all the procedures and scripts created for this project in Confluence
  • Used JIRA for creating and tracking issues
  • Environment: Linux, Ansible, Helm, GitHub, PCF, Terraform, Kubernetes, Java, Shell, Gradle, SonarQube, Cosmos DB, Harness, Spinnaker, Jira, Confluence, Rancher, Docker, AKS

AWS DevOps Engineer

Fannie Mae
03.2020 - 03.2021
  • Part of a large enterprise team supporting releases, deployments, automation, and all other DevOps activities for around 40 application teams at Fannie Mae
  • Automated AWS cloud activities in Python, including CloudWatch alarms, EMR cluster create/terminate, verification of S3 deployments and files after deployment, Lambda code verification, and many more
  • Managed and controlled the source code repository, currently housed in Bitbucket
  • Participated in implementing Branching and merging strategies
  • Participated in Scrum ceremonies, executing the Agile Scrum framework for projects
  • Built and maintained Docker container clusters managed by Kubernetes on Linux, using Bash, Git, and Docker
  • Used Ansible along with Python for automation in Cloud
  • Worked on IBM UrbanCode Deploy (UCD) and UrbanCode Release (UCR) for production deployments and releases
  • Worked on Services and tools like Tibco and Informatica as FannieMae has on-Prem applications and components hosted on them
  • Utilized Kubernetes as the runtime environment of the CI/CD system to build, test, and deploy
  • Used boto3 libraries for Python automation coding
  • Worked on migrating data from on-premises Oracle DB to AWS RDS and Aurora databases using a Flyway setup in UCD and Jenkins
  • Onboarded the Flyway setup for all Aurora databases and used PAM for DB password storage, retrieving passwords via ObjectRefId
  • Created Ansible YAML scripts for Automation purposes
  • Created environments, components, and new processes for applications
  • Used self-service scripts in UCD to create anything new in UCD
  • Onboarded new applications to UCD and created Jenkins pipelines for new applications and components
  • Worked with the SAM template for AWS activities such as Lambda creation, CloudWatch log groups, SNS topics, S3 buckets, and many more
  • Used SQL Queries to extract data and reports for Statistics and Analytics purposes
  • Used Toad to execute SQL commands, connect to the database, and perform actions
  • Experience in developing system automation tools in Python
  • Worked with the team on creating CI/CD (continuous integration and continuous delivery) pipelines using Jenkins
  • Configured Jenkins for integrated source control, builds, testing, and deployment
  • Part of the Release Management channel and participated in all production deployments along with the release team
  • Documented all the procedures and scripts created for this project in Confluence and the GitHub wiki
  • Used JIRA for creating and tracking issues
  • Environment: Linux (CentOS), AWS, Bitbucket, Ansible, CloudFormation, CloudWatch, CloudTrail, EC2, S3, RDS, Aurora, Jenkins, SQL, Agile/Scrum, Python, Kubernetes, Java, Shell, Maven, SonarQube, Oracle 12c, IBM UCD, UCR, Informatica, Tibco, Unix, Toad.

DevOps Engineer

Comcast
06.2016 - 03.2020
  • Maintained the automated Maven build system and implemented new features and scripts for the build system
  • Worked on Jenkins for Continuous Integration (CI) and Continuous Deployment (CD) purposes
  • Managed and controlled the source code repository, currently housed in GIT
  • Participated in implementing Branching and merging strategies
  • Participated in Scrum ceremonies, executing the Agile Scrum framework for projects
  • Configuration Automation using Ansible
  • Worked with Docker container snapshots, attaching to a running container, managing containers, directory structures and removing Docker images
  • Used Rancher to deploy, scale, load balance, and manage Docker containers
  • Monitored SQL Server performance using SQL Server Profiler to find long-running queries and deadlocks
  • Built and maintained Docker container clusters managed by Kubernetes on Linux, using Bash, Git, and Docker
  • Utilized Kubernetes and Docker as the runtime environment of the CI/CD system to build, test, and deploy
  • Installed Pivotal Cloud Foundry (PCF) on instances to manage the containers created by PCF
  • Used Jenkins as Continuous Integration tools to deploy the Spring Boot Microservices to Pivotal Cloud Foundry (PCF) using build pack
  • Upgraded existing virtual machines from Standard to Premium storage accounts
  • Patched and validated virtual machines in Azure
  • Experience using Azure Key Vault to encrypt keys and passwords
  • Responsible for a full development life cycle, including design, coding, testing and deployment
  • Created Ansible YAML scripts for Automation purposes
  • Used SQL Queries to extract data and reports for Statistics and Analytics purposes
  • Extensive knowledge on various Docker components like Docker Engine, Hub, Machine, Compose and Docker Registry
  • Developed and maintained infrastructure built around Docker containers
  • Created database maintenance plans for SQL Server performance, including database integrity checks, updating database statistics, re-indexing, and data backups
  • Worked on AppDynamics for API performance Monitoring and created alerts and dashboards for errors and metrics
  • Performed performance engineering tasks using JMeter and LoadNinja for both API and UI performance analysis
  • Triaged and supported Testing teams with Production and lower environment issues
  • Used Splunk and Kibana for getting the logs and metrics
  • Created and generated Daily Tableau reports for Production Statistics and sales
  • Installed and maintained Docker, Jenkins and Rancher on Linux CentOS servers and worked closely with the Network team regarding any server related issues, upgrades, and Security Patching tasks
  • Experience in developing system automation tools in Python
  • Worked with the team on creating CI/CD (continuous integration and continuous delivery) pipelines using Jenkins and integrated SonarQube for code coverage analysis
  • Worked with Rancher and Docker to deploy production and development stacks
  • Configured Jenkins for integrated source control, builds, testing, and deployment
  • Part of the Release Management channel and participated in all production deployments along with the release team
  • Environment: Linux (CentOS), AWS, Azure, GitHub, Ansible, Docker, OpenStack, Pivotal Cloud Foundry, Jenkins, SQL, Agile/Scrum, Kubernetes, OpenShift, Python, Java, Shell, Maven, Tableau, AppDynamics, Splunk, Kibana, SonarQube, Oracle 12c.

Build & Release Engineer

Apollo Health
08.2013 - 08.2014
  • Created and maintained Ant and shell scripts for automating the build and deployment process for Linux environments
  • Converted and automated builds using Maven and Ant
  • Scheduled automated nightly builds using Jenkins
  • Worked with Quality Assurance department to develop and improve process automation
  • Implemented, maintained, and supported reliable, timely, and reproducible builds for project teams
  • Maintained build related scripts developed in ANT, Python and shell
  • Modified build configuration files including Ant's build.xml
  • Worked with development team to migrate Ant scripts to Maven
  • Developed an HTML parser and built the DOM tree with it
  • Experienced in authoring pom.xml files, performing releases with the Maven release plugin, Mavenization of Java projects and managing Maven repositories
  • Researched and implemented code coverage and unit test plug-ins with Maven/Hudson
  • Configured virtualization on Linux using VMware and Virtual Box
  • Used ANT and MAVEN for building the applications and developing the build scripts
  • Involved in editing the existing ANT files in case of errors or changes in the project requirements
  • Involved in migrating data from CVS to ClearCase using ClearCase import tools
  • Converted old Make-based builds to Ant and XML for Java builds
  • Worked in creating WebSphere Application Server Clustered Environments and handling Load Balancing for QA, UAT and Production
  • Helped developers and other project teams to set views and environments
  • Used Shell, Perl, Ruby, and Python for environment builds, automation, and deployment on WebSphere and WebLogic application servers.

Education

Master’s - Electrical Engineering

University of Hartford
West Hartford, CT

Skills

  • Scripting Languages: HTML, Shell, Python, Terraform
  • SCM Tools: GitHub, Bitbucket, Azure DevOps
  • Atlassian Tools: Jira, Confluence
  • Configuration Management: Chef, Ansible
  • Build Tools: Maven, Gradle
  • CI Tools: Jenkins/Hudson, TFS, Harness, Azure DevOps
  • Operating Systems: MS Windows 10/8/7, UNIX, Linux, macOS
  • Cloud: Pivotal Cloud Foundry, AWS, Azure
  • Database: Oracle 11g/12c, Aurora, RDS, Redshift, PostgreSQL, Cosmos DB, MySQL
  • Servers: Apache, Tomcat, WebLogic, WebSphere

Additional Information

  • Worked on AWS component deployments and created cloud deployment pipelines using Jenkins and UCD
  • Experienced with IBM UrbanCode Deploy (UCD) and UrbanCode Release (UCR) for releases and deployments
  • Worked on numerous AWS resources and used Terraform and CloudFormation to create and maintain them
  • Worked on AWS IAM and its features, including roles, users, policies, and MFA; held the AWS admin access role and was responsible for maintaining everything in the project related to AWS
  • Worked with Data Engineering and ETL teams, supporting their processes with AWS resources and permissions such as S3 buckets, policies, AWS keys, and roles
  • Understanding of cloud environments such as AWS, Azure, and Pivotal Cloud Foundry (PCF); worked on cloud migrations from PCF to Azure and planned the migration milestones independently
  • Moved the code base from GitLab to GitHub SaaS using the Clobber tool
  • Managed hosting plans for Azure infrastructure, implementing and deploying workloads on Azure virtual machines (VMs)
  • Worked on Azure DevOps (ADO) pipelines and repositories and maintained projects
  • Provisioned resources using Terraform, including Azure Service Bus (ASB), App Insights, Cosmos DB, Blob storage accounts, Azure Key Vault, AAD registrations, and end-to-end application installations
  • Created GitHub Actions using workflows and workflow triggers for multiple branches, supporting the branching strategy
  • Installed and configured CI/CD and SCM tools for projects, including Jenkins, GitHub, Jira, and ADO
  • Good knowledge and experience using Splunk, Kibana, Prometheus, and Grafana for logging and monitoring; monitored Docker containers and Kubernetes infrastructure with Prometheus and Grafana
  • Proficient with container systems like Docker and container orchestration like EC2 Container Service and Kubernetes; worked with Terraform
