
Ujjivan Small Finance Bank
Naukri
Data Engineer
Bengaluru
Mid Level
Full Job Description
ROLE PURPOSE & OBJECTIVE
- The role is responsible for database design and development, data ingestion, data governance, maintenance, and delivery of reusable datasets
- Engages with business stakeholders and technology teams to ensure quality and timely delivery of data and information
- Drives and builds solutions within a new data platform for internal consumption (Data Science & Decision Management team), as well as the creation of dashboards
KEY DUTIES & RESPONSIBILITIES OF THE ROLE
Business/ Financials
- Design, create, maintain, expand, and optimize the data pipeline architecture under the Bank's data and analytics platform
- Help to improve data management processes - acquiring, transforming and storing massive volumes of structured and unstructured data
- Design, develop and test ETL/ELT processes and data pipelines
- Join data across multiple data environments (Data lakes / Data Warehouses) using complex optimized queries
- Prepare metadata and ingest data from source systems to the data warehouse, following the Bank's ingestion framework
- Identify and resolve performance issues; monitor data integration and ingestion jobs across multiple environments
- Identify, design and implement process improvements (automate manual processes, optimize data delivery, etc.)
- Automate system operations and establish zero touch routines
- Contribute to traditional data management systems as well as modern ones like Hadoop, Spark, Kafka, etc.
- Perform job scheduling and impact analysis
- Develop analytics dashboards
- Support ad-hoc business analytics requests and process automation tasks
- Ensure on-time delivery and quality of data pipeline and dashboards
- Conduct performance analysis, code reviews, and tuning on a regular basis
- Help develop new solutions for batch and real-time data and analytics use cases
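The batch ETL/ELT duties above follow a common extract-transform-load shape. As a minimal, hypothetical sketch only (all record fields and data-quality rules here are illustrative, not taken from the Bank's actual framework):

```python
# Minimal sketch of a batch ETL step: extract raw records, apply
# data-quality transforms, and load into a target store.
# All field names and rules are illustrative assumptions.

def extract(source):
    """Pull raw rows from a source (here, an in-memory list)."""
    return list(source)

def transform(rows):
    """Normalize the amount field and drop records missing an account id."""
    cleaned = []
    for row in rows:
        if not row.get("account_id"):
            continue  # illustrative quality rule: account_id is mandatory
        cleaned.append({
            "account_id": row["account_id"],
            "amount": round(float(row["amount"]), 2),
        })
    return cleaned

def load(rows, target):
    """Append transformed rows to the target store; return count loaded."""
    target.extend(rows)
    return len(rows)

if __name__ == "__main__":
    raw = [
        {"account_id": "A1", "amount": "100.456"},
        {"account_id": "", "amount": "5"},  # rejected: no account id
        {"account_id": "A2", "amount": "20.1"},
    ]
    warehouse = []
    loaded = load(transform(extract(raw)), warehouse)
    print(loaded)  # 2 rows survive the quality check
```

In a production setting each stage would typically be an orchestrated task (e.g., in a scheduler such as Apache Airflow, which the posting mentions) rather than plain function calls.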
Customer (Both Internal & External)
- Work with stakeholders including the senior management, platform team, technology team and design teams to assist with data-related technical issues and support their data infrastructure needs
- Discover data acquisition opportunities; analyze and optimize data sourcing to ensure data quality
- Work together with other data engineers, data architects, data scientists and data visualization experts and client product owners to deliver exceptional client services
- Build trusting relationships with stakeholders by consistently eliciting requirements that meet and deliver upon their business needs
Internal Process
- Work with data scientists on data transformation and compute to prepare for advanced analytics and dashboards
- Partner with other internal teams and peers in the department to ensure holistic Big Data and Machine Learning solutions that meet the needs of various stakeholders
Innovation & Learning
- Support innovation: identify new areas of improvement for data infrastructure and big data technology, with an eye to solving business problems
- Communicate the importance of managing data as an asset and shape the direction of data management capabilities that improve the Bank's data management landscape
- Coach and mentor junior team members, as required
MINIMUM REQUIREMENTS OF KNOWLEDGE & SKILLS
Educational Qualifications
- Degree in Computer Science, Software Engineering, Information Systems, Mathematics or other STEM majors
- Professional software development experience (one or more) with Scala, Spark, Hadoop, Java, Linux, and SQL
- 3+ years of experience in the big data ecosystem, with Hadoop (Pig, Hive, HDFS), Apache Spark and NoSQL/SQL databases
- Experience building big data ETL pipelines (e.g., Apache Airflow)
Experience (Years and Core Experience Type)
- 2-7 years of experience as a Data or BI Engineer handling large, complex data scenarios in an analytics environment
Certifications
- Experience with analytics programming languages (Python and R)
- Experience with visualization tools such as Tableau
- Experience with Agile software development
Functional Skills
- A passion for working with large, structured and unstructured datasets
- Experience in data modelling and schema design
- Experience in data integration (ETL/ELT) process design, development and testing
- Experience with various database types (relational, columnar, distributed, NoSQL, etc.) and eagerness to learn new ones
- Experience with Agile methodologies and tools
- Experience in building big data pipelines, architecture and data sets is a major asset
- Expertise in SQL, including comfort writing advanced SQL queries
- Skilled in SQL Query Optimization
- Coding proficiency in at least one modern programming language (Python is preferred)
- Good problem solving and analytical skills
- Exposure/Prior experience with Cloud platforms (GCP / AWS / Azure)
- Good to have: experience with Big Data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.)
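The SQL skills listed above center on joining and aggregating data with optimized queries. A small, hypothetical illustration using Python's built-in sqlite3 as a stand-in warehouse engine (table names, columns, and data are invented for the example):

```python
# Illustrative "advanced SQL" sketch: an indexed join plus GROUP BY
# aggregation, run against an in-memory SQLite database.
# All table/column names and values are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (account_id TEXT PRIMARY KEY, branch TEXT);
    CREATE TABLE txns (account_id TEXT, amount REAL);
    CREATE INDEX idx_txns_account ON txns(account_id);  -- supports the join
    INSERT INTO accounts VALUES ('A1', 'Bengaluru'), ('A2', 'Mumbai');
    INSERT INTO txns VALUES ('A1', 100.0), ('A1', 50.0), ('A2', 25.0);
""")

# Per-branch transaction totals, sorted largest first.
rows = conn.execute("""
    SELECT a.branch, SUM(t.amount) AS total
    FROM accounts a
    JOIN txns t ON t.account_id = a.account_id
    GROUP BY a.branch
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('Bengaluru', 150.0), ('Mumbai', 25.0)]
```

The index on the join key is the kind of tuning choice the "SQL Query Optimization" bullet refers to: without it, the engine must scan the transactions table for every matched account.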
Behavioral Skills
- Ability to work in a structured, efficient manner
- Ability to perform root cause analysis on data processes and pipelines to answer specific business questions, resolve issues, and identify opportunities for improvement
Competencies
- Support transformation processes towards a data centric culture
- Willingness to take on additional tasks and/or duties as needed
- Knowledge of at least one ETL software technology and scripting languages
- Drive a culture of data management and contribute to enhanced operational efficiency by incorporating best practices and methodologies into change management processes