# Calvin Kishore

> DevSecOps Engineer

Location: New Hyde Park, New York, United States
Profile: https://flows.cv/calvinkishore

I built my career in DevSecOps: cloud environments across AWS, Azure, GCP, and Prisma Cloud, with automated pipelines and systems engineered to stay secure under pressure. Now I'm applying that same precision to AI.

I started with data collection: understanding where data comes from, how it's gathered, and why quality at the source matters for everything that follows. That led me into data analysis, learning how to explore, clean, and find patterns before any model ever gets built. From there I moved into machine learning: classical algorithms, model training, evaluation, and iteration. Then advanced ML, where the complexity deepens and the real engineering decisions begin. NLP came next, teaching machines to understand language, followed by deep learning and the neural network architectures that power modern AI. That naturally led to Transformers, the architecture behind almost everything cutting-edge today. From Transformers, generative AI opened up: building systems that don't just analyze, but create. And at the end of that road, agentic AI: autonomous systems that reason, plan, and take action on their own.

Along the way I built real AI-powered systems:

🔹 **Enterprise Workforce Performance & Attrition Risk Predictor**
An n8n pipeline that aggregates HR data, classifies employee attrition risk using AI, and auto-generates structured retention recommendations for HR teams.

🔹 **Intelligent Revenue Leakage Detection System**
An AI-driven monitoring pipeline that analyzes transaction data, detects anomalies in real time, and generates plain-language explanations for finance review.
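The core idea behind the revenue leakage detector (flagging outlier transactions) can be sketched in a few lines. This is a minimal pure-Python stand-in, not the production n8n/AI pipeline: the median/MAD rule and the transaction values below are illustrative assumptions.

```python
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Flag indices of transactions far from the median, measured in
    units of the median absolute deviation (MAD). MAD is a robust
    alternative to z-scores: a single huge outlier cannot inflate it
    the way it inflates a standard deviation."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:  # all amounts (nearly) identical: nothing to flag
        return []
    return [i for i, a in enumerate(amounts) if abs(a - med) / mad > threshold]

# Hypothetical transaction batch: one amount is wildly out of range.
txns = [102.0, 98.5, 101.2, 99.9, 5000.0, 100.4]
print(flag_anomalies(txns))  # -> [4]
```

In a real pipeline the flagged indices would feed the explanation-generation step; here they are simply printed.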
What I bring that most AI candidates don't:

- Security-first mindset: secrets management, input validation, safe deployment
- Real automation depth: n8n, Ansible, Terraform, CI/CD across AWS, Azure, GCP
- Full Python stack: NumPy, Pandas, Scikit-learn, Matplotlib, Seaborn, LangChain, Jupyter
- Can go from raw data to a deployed agentic pipeline and visualize findings in Tableau

## Work Experience

### DevSecOps Engineer @ Sagent
Jan 2023 – Jan 2026

- Integrated automated SAST/DAST security scanning into CI/CD pipelines, identifying and resolving critical vulnerabilities before production deployment.
- Automated infrastructure provisioning and security compliance validation using Terraform and Python scripting, reducing manual effort.
- Secured containerized workloads across Docker and Kubernetes, enforcing image scanning policies, runtime security controls, and network segmentation.
- Managed secrets and credential lifecycle with HashiCorp Vault, enforcing least-privilege access patterns across all engineering teams.
- Managed work items, boards, and sprint planning using Azure Boards within Azure DevOps.
- Implemented branch policies, pull request approvals, and code review workflows within Azure Repos.
- Built reusable YAML pipeline templates in Azure DevOps to standardize deployment processes across multiple projects.
- Monitored and prioritized critical vulnerabilities based on CVSS scores and risk metrics using dashboards within Rapid7 InsightVM.
- Performed vulnerability triage in Checkmarx One by reviewing scan results and prioritizing findings based on severity, exploitability, and business impact.
- Used Visual Studio Code to develop, debug, and manage infrastructure and application code, integrating Git version control and extensions to streamline CI/CD pipeline development and DevSecOps automation workflows.
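The severity-first triage described above can be sketched as a small helper that orders findings by CVSS score, breaking ties in favor of exploitable issues. The finding IDs, field names, and scores below are hypothetical, not actual InsightVM or Checkmarx One output.

```python
# Hypothetical scan findings; real tools attach many more fields.
findings = [
    {"id": "CVE-A", "cvss": 9.8, "exploitable": True},
    {"id": "CVE-B", "cvss": 7.5, "exploitable": False},
    {"id": "CVE-C", "cvss": 9.1, "exploitable": False},
]

def triage(findings):
    """Return findings ordered highest CVSS first; among equal
    scores, known-exploitable findings come first (True > False)."""
    return sorted(findings,
                  key=lambda f: (f["cvss"], f["exploitable"]),
                  reverse=True)

print([f["id"] for f in triage(findings)])  # -> ['CVE-A', 'CVE-C', 'CVE-B']
```

In practice business impact would be a third key in the sort tuple; it is omitted here to keep the sketch minimal.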
### Infrastructure Specialist @ Kyndryl
Jan 2023 – Jan 2023

- Used 1Password to securely store and manage credentials, ensuring secure access to applications and systems.
- Set up and implemented Microsoft365DSC with Azure DevOps to automate Exchange Online, Teams, and security configuration.
- Used Box to keep documentation organized and accessible.
- Used Mural to collaborate with team members on weekly sprints.
- Ensured authentication methods such as MFA and single sign-on were enforced.
- Monitored API usage and performance metrics using Azure Monitor to identify and address issues.
- Set up pipeline testing with current scripts to determine whether workflows needed modification.
- Supported enterprise Active Directory environments.
- Attended daily Scrum meetings and completed weekly sprints.
- Managed identity and access management for Azure AD and Azure subscriptions.
- Produced documentation on Azure DevOps and Microsoft365DSC.
- Managed and completed tickets allocated within Jira.
- Used Microsoft Visio to transform ideas into visual diagrams as needed.
- Used Visual Studio Code to support development operations.
- Implemented API gateways and management platforms to secure, monitor, and manage APIs effectively.
- Configured virtual machines, storage accounts, and resource groups on Azure.
- Integrated security practices into DevOps workflows to ensure Microsoft Dynamics 365 deployments adhered to compliance requirements and best practices.
- Incorporated automated testing frameworks into the DevOps pipeline for Dynamics 365 to ensure code quality and reduce manual testing.
- Implemented version control best practices using Git within Azure DevOps for Dynamics 365 solutions to enable better collaboration and traceability.

### Senior Infrastructure Engineer @ Bravo Wellness
Jan 2022 – Jan 2023

- Integrated APIs into cloud-based applications and workflows, enabling streamlined data exchange and process automation.
- Resolved tickets allocated by managers through Jira, using user stories to view and manage ticket progress.
- Used Okta SSO to securely access enterprise applications.
- Supported Microsoft Active Directory (group memberships, shared-drive access, policies).
- Used the Microsoft 365 admin center to verify group memberships, change licenses, and reset passwords.
- Designed, planned, and migrated the company's on-premises data center and applications to Azure, utilizing cloud adoption best practices.
- Led the migration of on-premises Dynamics 365 to Azure, leveraging DevOps tools for a seamless transition and improved scalability.
- Managed and configured Prisma Cloud to ensure cloud security for AWS and GCP environments.
- Utilized Prisma Cloud's vulnerability management features to identify, prioritize, and remediate security risks across cloud services.
- Used SSH to access servers and installed KVM to enable efficient virtualization for resource optimization.
- Managed multiple Dynamics 365 environments (Dev, Test, and Prod) using DevOps principles to optimize resources and deployment efficiency.
- Implemented and configured CSPM tools such as Azure Security Center to monitor cloud infrastructure configurations and assess compliance with security standards.
- Managed helpdesk tickets through Freshservice.
- Managed and provisioned resources in AWS, utilizing services such as EC2, S3, and IAM for scalable cloud infrastructure.
- Used PRTG for infrastructure monitoring.
- Used GitLab to enforce access controls, branch protection rules, and merge request approvals to maintain code integrity and security.
- Used Secret Server to access privileged accounts, applications, and services.
- Configured Palo Alto firewalls: GlobalProtect VPN, security policies, and data and URL filtering.
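The least-privilege IAM work mentioned above can be illustrated with a small policy builder. The dict structure follows the AWS IAM JSON policy grammar; the function name and bucket name are placeholders for this sketch, not anything from a real deployment.

```python
import json

def read_only_s3_policy(bucket):
    """Build a least-privilege IAM policy granting read-only access
    to a single S3 bucket. The bucket ARN covers ListBucket; the
    /* ARN covers GetObject on the objects inside it."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }

# Placeholder bucket name; in practice the JSON would be attached
# to a role or user via IAM rather than printed.
print(json.dumps(read_only_s3_policy("example-bucket"), indent=2))
```

Scoping both the `Action` list and the `Resource` ARNs this narrowly is what "least privilege" means in practice: the policy cannot be reused to touch any other bucket.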
### Cloud Engineer @ The Port Authority of New York & New Jersey
Jan 2020 – Jan 2022

- Configured and assigned IAM permissions and roles for user and group management across clouds.
- Created lifecycle policies to manage S3 objects and set retention periods for sensitive buckets.
- Created and managed S3 bucket policies; used S3 and Glacier for storage, backup, and archiving in AWS; and enabled versioning and lifecycle management policies.
- Created AWS and Azure infrastructure using Terraform to provision required resources.
- Created monitors, alarms, and notifications for hosted resources using CloudWatch.
- Managed AWS services using the AWS CLI and Terraform, covering resources such as S3, CloudFront, RDS, Route 53, SNS, SQS, CloudWatch, and AWS Config.
- Documented RQL policy configurations and usage guidelines to share knowledge and ensure consistent policy enforcement.
- Integrated CSPM tools with cloud-native services to enhance visibility into cloud security posture.
- Monitored AWS infrastructure through CloudWatch metrics and CloudTrail.
- Worked with Azure and GCP to create backups and virtual machines and to troubleshoot issues.
- Designed, implemented, and maintained data networks in GCP.
- Created solutions in GCP using the appropriate tooling for each need.
- Developed and deployed software applications on GCP.
- Monitored GCP resources to ensure security controls were protected against unauthorized modification.
- Managed Git branches for multiple releases of business applications using best practices.
- Implemented Jenkins for pipeline releases and managed the Jenkins infrastructure.
- Debugged network and system issues through ticket management using Jira user stories.
- Deployed Splunk infrastructure on EC2 for resource logging and set up dashboards.
- Troubleshot containers using Docker logs and CLI commands to determine root cause.
- Created custom Ansible inventory files and implemented multiple variables.

### Linux System Administrator @ The Port Authority of New York & New Jersey
Jan 2017 – Jan 2019

- Set up device discovery, SNMP polling, and alerting rules in LibreNMS to monitor network infrastructure and identify issues.
- Managed, provisioned, and configured Linux servers in production (RHEL, CentOS, Ubuntu).
- Supported and troubleshot day-to-day operations, application support, user management, change management, and incident management on 24x7 production systems.
- Configured LVM (Logical Volume Manager) for dynamic storage management: creating physical volumes, volume groups, and logical volumes, and extending and resizing logical volumes.
- Managed and accessed network infrastructure using the Cisco Meraki dashboard to monitor and configure switches, access points, and firewalls.
- Debugged OS processes, sluggish servers, and intensive operations with system process tools (top, kill, renice) and system activity reports to optimize performance on mission-critical servers.
- Automated and scheduled repetitive tasks, business operations, and backups using cron and bash scripts.
- Used iDRAC and iLO to manage and monitor Dell and HP bare-metal servers, including troubleshooting server hangs and filesystem corruption.
- Configured RAID levels (0, 1, 5, 6) on bare-metal servers to optimize performance for disaster recovery procedures and improve redundancy of mission-critical systems.
- Used stream-editing tools (sed, awk, cut, sort).
- Configured networking services such as DNS, DHCP, FTP, TFTP, HTTPS, and PXE boot, and verified connectivity with ping.
- Managed the DHCP server, troubleshot lease issues, and set up DNS.
- Utilized MySQL, a relational database, to store and access data across multiple servers, replicate data, and partition tables for improved performance.
- Performed network scans using nmap and set up sar reports based on resource usage.
- Administered Linux servers, Windows, and macOS systems.

## Education

### Associate's Degree in Internet and Information Technology
Queensborough Community College of The City University of New York (CUNY)

## Contact & Social

- LinkedIn: https://linkedin.com/in/calvinkishore

---

Source: https://flows.cv/calvinkishore
JSON Resume: https://flows.cv/calvinkishore/resume.json
Last updated: 2026-04-18