
AXA XL • 2h ago
Naukri
Sr. Associate Application Developer
Gurugram, Bengaluru
Full Time
Senior Level
Qualifications
Experience Level: Senior Level
Full Job Description
Senior Associate Application Developer - Gurugram/Bangalore
Join the Americas App Solutions team at AXA XL and play a key role in designing, delivering, and supporting critical software solutions for the global Risk Management function. This role focuses on internally developed .NET/ETL applications, with significant upcoming projects including market-wide initiatives, security transformations, and major cloud migrations. We are seeking an experienced ETL developer to enhance and maintain our enterprise data integration workflows, with a strong emphasis on building scalable and reliable ETL pipelines in the Azure cloud.
Key Responsibilities:
- Ensure the operational reliability and data accuracy of existing ETL workflows through ongoing maintenance, monitoring, and troubleshooting.
- Develop and extend ETL processes to accommodate new data sources and evolving business logic, and to enhance scalability.
- Create and orchestrate PySpark notebooks within Azure Databricks for efficient data transformation, cleansing, and enrichment.
- Configure and optimize Azure Databricks clusters for peak performance and cost-effectiveness.
- Implement robust Delta Lake solutions, leveraging ACID compliance, versioning, and time travel for dependable data lake operations.
- Automate complex data workflows using Databricks Jobs and Azure Data Factory (ADF) pipelines.
- Design and manage scalable ADF pipelines, incorporating parameterization and reusable integration patterns.
- Utilize Spark APIs to integrate with Azure Blob Storage and ADLS Gen2 for high-performance data ingestion and output.
- Uphold data quality, consistency, and governance standards across both legacy and cloud-based data pipelines.
- Collaborate effectively with data analysts, engineers, and business stakeholders to deliver clean, validated data essential for reporting and analytics.
- Engage fully in the Software Development Life Cycle (SDLC), from initial design through to deployment, with a focus on creating maintainable and audit-ready solutions.
- Develop efficient and maintainable ETL logic and scripts, adhering to best practices in security and performance.
- Diagnose and resolve pipeline issues across various data infrastructure layers, ensuring operational continuity.
- Produce comprehensive documentation for technical designs, workflows, and data processing logic to support long-term maintainability and knowledge transfer.
- Actively research and recommend emerging cloud and data engineering technologies to drive innovation and continuous improvement.
- Adhere strictly to internal controls, audit protocols, and secure data handling procedures to ensure compliance and operational excellence.
- Provide accurate time and effort estimations for development tasks, considering complexity and potential risks.
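For illustration, the transformation, cleansing, and enrichment work described above might look like the following minimal sketch. It is written in plain Python for portability; in the role itself this logic would live in a PySpark notebook on Azure Databricks (e.g. `dropna`, `withColumn`, writes to a Delta table). All field names, rules, and sample data here are hypothetical, not taken from AXA XL's actual pipelines.

```python
# Illustrative ETL cleansing/enrichment step (hypothetical fields and rules).
# In a Databricks notebook the same steps would be PySpark DataFrame operations.
from datetime import date

def clean_and_enrich(records):
    """Drop rows missing a policy id, normalise currency codes,
    and derive a policy_year column."""
    out = []
    for r in records:
        if not r.get("policy_id"):  # cleansing: drop incomplete rows
            continue
        r = dict(r)
        # normalisation: default and upper-case the currency code
        r["currency"] = (r.get("currency") or "USD").upper()
        # enrichment: derive the policy year from the effective date
        r["policy_year"] = date.fromisoformat(r["effective_date"]).year
        out.append(r)
    return out

rows = [
    {"policy_id": "P-1", "currency": "usd", "effective_date": "2023-04-01"},
    {"policy_id": None, "currency": "EUR", "effective_date": "2023-05-01"},
]
cleaned = clean_and_enrich(rows)
```

The same shape of logic scales naturally to PySpark, where each step becomes a DataFrame transformation rather than a Python loop.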
Required Skills and Abilities:
- Demonstrated experience with Informatica PowerCenter, including mappings, workflows, session tuning, and parameterization.
- In-depth expertise in Azure Databricks and PySpark, covering notebook development, cluster configuration and tuning, Delta Lake implementation (ACID, versioning, time travel), job orchestration (Databricks Jobs, ADF), and integration with Azure storage via Spark APIs.
- Substantial hands-on experience with Azure Data Factory (ADF), including pipeline development and management, parameterization, dynamic datasets, notebook integration, and pipeline monitoring.
- Strong proficiency in SQL, PL/SQL, and scripting languages such as Python, Bash, or PowerShell.
- A solid understanding of data warehousing principles, dimensional modeling, and data profiling techniques.
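As a rough sketch of the data-profiling techniques listed above, a candidate might compute per-column null and distinct counts before loading a dimensional model. The example below uses plain Python with hypothetical column names; in practice this would more likely be SQL or PySpark aggregations.

```python
# Minimal data-profiling sketch: per-column null counts and distinct counts
# (hypothetical columns; in the role this would typically be SQL or PySpark).
def profile(rows):
    """Return {column: {"nulls": n, "distinct": d}} for a list of dicts."""
    cols = {k for r in rows for k in r}
    stats = {}
    for c in cols:
        values = [r.get(c) for r in rows]
        stats[c] = {
            "nulls": sum(v is None for v in values),
            "distinct": len({v for v in values if v is not None}),
        }
    return stats

sample = [
    {"claim_id": 1, "region": "EMEA"},
    {"claim_id": 2, "region": None},
    {"claim_id": 3, "region": "EMEA"},
]
report = profile(sample)
```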
Desired Skills and Abilities:
- Familiarity with Git, CI/CD pipelines, and modern DevOps methodologies.
- Working knowledge of data governance frameworks, audit trails, metadata management, and compliance regulations like HIPAA and GDPR.
- Exceptional problem-solving and troubleshooting skills, with a proven ability to resolve performance bottlenecks and job failures.
- Awareness of Azure services such as Functions, App Services, API Management, and Application Insights.
- Understanding of Azure Key Vault for secure management of secrets and credentials.
- Experience with Spark-based big data ecosystems (e.g., Hive, Kafka) is advantageous.
- Server administration and database optimization experience is a plus.
You will report to the Application Manager.
Company
AXA XL
Gurugram, Bengaluru
Posted on Naukri