Poornima Aranganathan

Hounslow, HNS

Summary

Accomplished data professional with expertise in Azure Data Services, data modelling and warehousing, and data reporting and visualisation. Demonstrates strong capabilities in security, governance, and monitoring while effectively collaborating with cross-functional teams including product owners, business analysts, and leadership. Skilled in presenting insights through reports, dashboards, and live demos to diverse stakeholders. Proficient in Agile/Scrum methodologies, contributing to sprint planning and reviews. Committed to optimising data flows for enhanced efficiency and reliability while mentoring team members to uphold best practices.

Overview

  • Years of professional experience
  • 4 years of post-secondary education
  • 1 certification

Work history

Data Operations Engineer/Data Analyst (DataOps)

Revantage UK
08.2023 - 06.2025
  • Design, build, and monitor data pipelines using Azure Data Factory (ADF) for data ingestion, transformation, and orchestration.
  • Develop ETL/ELT processes to move data from on-premises to Azure cloud environments.
  • Document known issues and resolutions within Confluence to build a robust knowledge base.
  • Collaborate with data analysts, data scientists, and business stakeholders to provide production support for failures.
  • Manage and optimize Azure Data Lake (Gen1/Gen2) for scalable data storage.
  • Utilize Azure Databricks for large-scale data processing and transformation using Spark and DBT.
  • Test DBT models locally to identify and resolve errors before deployment.
  • Handle Databricks workflow failures and cluster termination errors caused by spot instance evictions.
  • Implement data quality checks and validation using Azure Data Factory pipelines.
  • Implement role-based access control (RBAC) and data encryption using Azure Key Vault.
  • Rotate service principal client secrets using Azure App Registrations to maintain secure access; a brief sketch of storing a rotated secret in Key Vault follows this list.
  • Create Databricks Personal Access Tokens (PAT) for Azure Service Principal (SPN) authentication.
  • Update Azure DevOps Personal Access Tokens (PAT) for service connections to maintain CI/CD integrations.
  • Implemented Multi-Factor Authentication (MFA) for Snowflake users as part of security measures across multiple portfolio companies, ensuring enhanced data protection and compliance with security standards.
  • Set up Logic Apps to monitor Azure resources and integrate with Microsoft Defender for Cloud Security to ensure compliance.
  • Monitor production data pipelines and job performance using Azure Monitor and Log Analytics.
  • Monitored production pipeline failures for multiple portfolio companies, proactively identifying issues and collaborating with team leaders and data architects to implement quick and effective fixes, ensuring minimal downtime.
  • Create interactive reports and dashboards using Power BI connected to Azure Data Sources and scheduled data refresh.
  • Monitor and respond to data pipeline incidents and failures reported through ServiceNow.
  • Perform root cause analysis for data incidents, and create and maintain incident reports in ServiceNow.
  • Collaborate with IT support and engineering teams to resolve data-related incidents efficiently.
  • Collaborate with data architects to identify solutions that minimize production failures.
  • Proactively identified a critical Azure Data Factory pipeline issue that was disrupting data ingestion; immediately initiated troubleshooting and applied a temporary fix to restore data flow.
  • Notified relevant stakeholders via Teams and ServiceNow, providing updates on mitigation progress and RCA.
  • Adhere to ITIL best practices for incident management, including ticket creation, escalation, resolution, and closure.
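
The secret rotation and Key Vault usage noted above can be illustrated with a minimal Python sketch using the azure-identity and azure-keyvault-secrets libraries; the vault URL, secret name, and values are placeholders, not taken from the actual environment:

    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    # Placeholder vault; DefaultAzureCredential resolves an SPN or managed identity.
    credential = DefaultAzureCredential()
    client = SecretClient(vault_url="https://my-vault.vault.azure.net", credential=credential)

    # After rotating the client secret in Azure App Registrations, store the new
    # value so linked services and Databricks secret scopes pick it up.
    client.set_secret("adf-spn-client-secret", "<new-client-secret>")

    # Downstream consumers read it back instead of holding the value themselves.
    secret = client.get_secret("adf-spn-client-secret")
    print(secret.name, secret.properties.updated_on)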

Extended Stay America (ESA) - Blackstone

Revantage North America
09.2022 - 12.2022
  • Processed raw, unstructured, and semi-structured data from various sources, including Amazon S3 buckets, using Azure Data Factory pipelines.
  • Used Databricks to process CSV and TXT file-based data as well as data from on-premises servers.
  • Invoked the Tableau REST API from Azure Databricks to trigger data source refresh jobs on the Tableau portal.
  • Created an email notification framework using SendGrid in Databricks notebooks to alert file owners on the success or failure of file processing; a brief sketch follows this list.
  • Set up triggers in Azure Data Factory to run the pipeline at specific times, checking for the latest files in the S3 bucket and processing the data.
  • Built database models, APIs, and views using Python for an interactive web-based solution integrated with the data lake for analysis.
  • Worked on Data vault and Data Mart models.
  • Set up the data warehouse following Kimball methodology best practices and developed ETL packages using Data Manager.
  • Worked closely with the Marketing, Operations and Data Insights and Visualization teams to identify trends and data sets for the migration.
  • Developed normalized Logical and Physical database models to design OLAP system.
  • Analyzed and monitored performance bottlenecks and key metrics to optimize software and system performance.
  • Created branches for each requirement, merged from development to release branches, and created release tags using Git in Azure DevOps.
  • Worked with Managed File transfer teams to get source data files for processing.
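
The SendGrid notification framework mentioned above could look roughly like the following Python sketch; the sender address and subject wording are illustrative, and the API key would come from a Databricks secret scope:

    from sendgrid import SendGridAPIClient
    from sendgrid.helpers.mail import Mail

    def notify_file_owner(owner_email, file_name, succeeded, api_key):
        # Build a success/failure alert for the owner of the processed file.
        status = "succeeded" if succeeded else "FAILED"
        message = Mail(
            from_email="dataops-alerts@example.com",  # illustrative sender
            to_emails=owner_email,
            subject=f"File processing {status}: {file_name}",
            html_content=f"<p>Processing of <b>{file_name}</b> {status}.</p>",
        )
        # SendGrid returns HTTP 202 when the message is accepted for delivery.
        response = SendGridAPIClient(api_key).send(message)
        return response.status_code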

Kerbey

Revantage - Blackstone
04.2022 - 09.2022
  • Processed raw and unstructured data using Azure.
  • Built data pipelines for processing the data based on the medallion architecture.
  • Set various data quality rules to ensure there is no data duplication or data inconsistency.
  • Built various data pipelines for processing data from different sources and providing the processed data for reporting using Power BI.
  • Worked on creating Scheduled triggers in ADF to automate the pipeline execution on a daily basis.
  • Implemented a deletion-capture feature to detect deleted rows in ingested entities; a brief sketch follows this list.
  • Used Databricks for various data transformations with PySpark.
  • Created a pipeline for data validation between the source, prepared, and Snowflake layers.
  • Debugged the Snowflake pipeline to check for errors during execution.
  • Set up email notification framework to send alerts when there is success/failure of data pipelines.
  • Processed files uploaded by file owners using Azure Data Factory pipelines, successfully loading them as Parquet into ADLS.
  • Monitored the pipeline activity in Azure monitor to check for failures.
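
A minimal PySpark sketch of the deletion-capture idea referenced above; the storage path, table names, and key column are placeholders:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()

    # Latest source extract vs. previously ingested (silver) data -- placeholders.
    source_df = spark.read.parquet("abfss://raw@account.dfs.core.windows.net/entity/latest/")
    target_df = spark.table("silver.entity")

    # Keys present in the target but missing from the latest source are treated as deletes.
    deleted_rows = (
        target_df.join(source_df, on="entity_id", how="left_anti")
                 .withColumn("is_deleted", F.lit(True))
                 .withColumn("deleted_at", F.current_timestamp())
    )

    deleted_rows.write.mode("append").saveAsTable("silver.entity_deletions")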

Data Optimization Framework (DOF)

Revantage - Blackstone
London
09.2021 - 03.2022
  • Built the ingestion ETL pipeline from the on-premises Oracle server via Azure Data Factory to Azure Blob storage.
  • Implemented data sharing in Snowflake using Snowflake connectors.
  • Implemented Snowflake masking policies for PII across the Snowflake databases and tables created.
  • Worked on data ingestion, validation, and transformations using Azure Data Factory and Databricks notebooks.
  • Worked on data deduplication to ensure the source data is free of duplicates.
  • Configured the data factory pipeline with the change data capture activity for the Delta tables.
  • Stored linked service credentials in Azure Key Vault and created a scope in Data Factory.
  • Configured notebooks for the transaction data using PySpark to load the data and installed the required libraries.
  • Implemented data encryption at rest to protect sensitive data, with column-level encryption in Databricks.
  • Created an API Management service to publish, consume, and manage APIs running on the Microsoft Azure platform.
  • Worked on an Azure Functions app to decrypt the data via API.
  • Created Job Clusters in Databricks for cost reduction and running automated tasks.
  • Implemented data reconciliation to verify target data against source data; a brief sketch follows this list.
  • Configured Azure Key Vault with all the credentials and created secret scope at Databricks.
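
The reconciliation step mentioned above could be sketched in PySpark along these lines; the paths, table names, and key column are placeholders:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    source_df = spark.read.format("delta").load("/mnt/bronze/transactions")  # placeholder path
    target_df = spark.table("gold.transactions")                             # placeholder table

    # Compare row counts and look for keyed rows present on only one side.
    source_count, target_count = source_df.count(), target_df.count()
    missing_in_target = source_df.join(target_df, "transaction_id", "left_anti").count()
    extra_in_target = target_df.join(source_df, "transaction_id", "left_anti").count()

    if source_count != target_count or missing_in_target or extra_in_target:
        raise ValueError(
            f"Reconciliation failed: source={source_count}, target={target_count}, "
            f"missing={missing_in_target}, extra={extra_in_target}"
        )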

Data Operations Engineer

Cognizant
  • Involved in analysis and discovery of data from the Oracle server hosted on-premises.
  • Built the ingestion pipeline from the on-premises Oracle server via Azure Data Factory to Azure Blob storage.
  • Worked on data ingestion, validation, and transformations using Azure Data Factory and Databricks notebooks.
  • Configured the Data Factory pipeline and created Delta tables; see the sketch after this list.
  • Stored linked service credentials in Azure Key Vault and created a scope in Data Factory.
  • Built database models, APIs, and views using Python for an interactive web-based solution integrated with the data lake for analysis.
  • Set up the data warehouse following Kimball methodology best practices and developed ETL packages using Data Manager.
  • Worked closely with the Marketing, Operations and Data Insights and Visualization teams to identify trends and data sets for migration.
  • Developed 3NF logical and physical database model designs in the SQL Server data warehouse.
  • Fine-tuned queries in stored procedures.
  • Developed normalized Logical and Physical database models to design OLAP system.
  • Analyzed and monitored performance bottlenecks and key metrics to optimize software and system performance.
  • Provided daily monitoring, management, troubleshooting and issue resolution to systems and services hosted on cloud resources.
  • Provided support for raised tickets and delivered automated solutions to keep customer information up to date on legacy systems.
  • Monitored data and tidied up records daily.
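
The Delta table creation referenced above might look like this minimal PySpark sketch; the landing path and table name are placeholders:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Files ingested by the Data Factory pipeline land as CSV before conversion to Delta.
    raw_df = spark.read.option("header", "true").csv("/mnt/landing/oracle_export/")

    (raw_df.write
           .format("delta")
           .mode("overwrite")
           .option("overwriteSchema", "true")
           .saveAsTable("bronze.oracle_customers"))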

Education

Bachelor of Engineering - Computer Science

ANNA UNIVERSITY
Chennai, Tamil Nadu
08.2010 - 08.2014

Skills

  • Azure Data Services
  • Data Modelling & Warehousing
  • Data Reporting & Visualization
  • Security, Governance & Monitoring

Soft Skills & Collaboration

  • Clearly document and report on data pipelines, architecture, and technical processes
  • Collaborate with Product Owners, Business Analysts, and leadership teams across different portfolios
  • Present insights effectively using reports, dashboards, and live demos to both technical and non-technical stakeholders
  • Work within Agile/Scrum methodologies and contribute to sprint planning and reviews
  • Collaborate closely with cross-functional teams, including data scientists, analysts, and engineers
  • Share knowledge and provide mentorship to support team development and best practices
  • Troubleshoot performance issues and identify optimization opportunities in data workflows
  • Optimize data flows and query execution to improve efficiency and reliability

Certification

  • Microsoft Certified: Azure Data Engineer Associate
  • Databricks Certified Data Engineer Associate
  • Databricks Accredited Lakehouse Fundamentals
  • DBT Fundamentals – dbt Labs

Timeline

Data Operations Engineer/Data Analyst (DataOps)

Revantage UK
08.2023 - 06.2025

Extended Stay America (ESA) - Blackstone

Revantage North America
09.2022 - 12.2022

Kerbey

Revantage - Blackstone
04.2022 - 09.2022

Data Optimization Framework (DOF)

Revantage - Blackstone
09.2021 - 03.2022

Bachelor of Engineering - Computer Science

ANNA UNIVERSITY
08.2010 - 08.2014

Data Operations Engineer

Cognizant