
Dun & Bradstreet
Posted on Naukri
Data Engineer
Hybrid - Hyderabad
Full Time
Mid Level
Full Job Description
Dun & Bradstreet seeks a skilled Data Engineer to join our Hyderabad hybrid team. In this pivotal role, you will be instrumental in upholding the integrity and quality of our data, empowering informed decision-making and fostering business growth. You'll collaborate closely with diverse teams to ensure the accuracy, consistency, and reliability of our data assets. Your responsibilities will include partnering with stakeholders and fellow data engineers to develop data processing monitoring systems, automate data workflows, and drive ongoing enhancements.
Key Responsibilities:
- Implement a comprehensive data processing strategy aligned with organizational standards and business objectives.
- Develop a deep understanding of Dun & Bradstreet's inventory data.
- Conduct baseline data processing monitoring to proactively identify and address issues.
- Apply advanced data analysis and profiling techniques.
- Automate data processing monitoring solutions and internal workflows.
- Work within well-defined data models to ensure organized data storage.
- Utilize PowerBI and/or Looker for designing, creating, and administering dashboards that yield insights from data processing monitoring results.
- Execute a robust data processing framework incorporating automated testing.
- Communicate effectively with globally distributed stakeholders using JIRA and Confluence.
- Accurately capture requirements and gain a thorough understanding of use cases.
- Propose improvements to the data processing teams' internal processes.
- Generate regular reports on key data processing metrics.
- Review data to identify patterns or trends indicative of processing errors.
- Maintain detailed documentation of data processing procedures and findings.
- Adhere strictly to data governance policies and procedures.
- Continuously educate yourself on industry best practices and emerging technologies in data processing.
Required Skills:
- Bachelor's degree in Computer Science, Information Technology, or a related discipline.
- Minimum of 2 years of experience with demonstrated in-depth knowledge of data analysis, querying languages, data modeling, and the software development lifecycle.
- Proficiency in SQL, with a preference for BigQuery.
- An agile mindset and familiarity with agile project management methodologies (Scrum/Kanban).
- Solid understanding of database design, modeling, and best practices.
- Experience with cloud computing platforms, preferably Google Cloud Platform (GCP).
- Hands-on experience with data visualization tools such as PowerBI, Looker, or similar.
- Strong analytical, process improvement, and problem-solving abilities.
- Excellent communication skills, with the ability to clearly articulate data-related issues and proposed solutions.
- A strong commitment to meeting deadlines, upholding release schedules, and fostering excellent teamwork.
Valuable Additional Skills:
- Proficiency in Python and/or Scala for data wrangling and analysis.
- Understanding of DevOps best practices, including CI/CD, automation, monitoring, observability, agile project management, version control, and continuous feedback loops.
- Experience with data observability tools like Acceldata or Informatica DQ.
- Familiarity with XML and JSON data structures.
- Knowledge of ETL processes and their impact on data processing.
- Exposure to Machine Learning concepts, particularly anomaly detection.
- Experience working effectively as part of a globally distributed team across different time zones.