
Data Engineer
Experience Level: Mid Level
About The Job
As a Data Engineer for the Enterprise Platforms team at Google, you will be instrumental in building and maintaining the critical data infrastructure that drives our product strategy. Your responsibilities will include designing, developing, and optimizing data pipelines to ensure high data quality and accessibility for advanced analytics. Your technical expertise will empower the product team to leverage data-driven insights to optimize product feature adoption and performance and to measure the impact of strategic initiatives.
To accelerate the growth and market leadership of Enterprise Buying Platforms (DV360 and SA360) in Bengaluru, India, you will address key business questions and deliver actionable, data-driven insights that shape product and commercial strategy. The Enterprise Platform Data Science team will provide quantitative support, market understanding, and a strategic perspective to our partners across the organization, working in close collaboration with the Ads and Commerce Finance team.
Google Ads is dedicated to powering the open internet with cutting-edge technology that fosters connections and creates value for individuals, publishers, advertisers, and Google itself. Our diverse teams build Google's Advertising products, spanning search, display, shopping, travel, and video advertising, alongside analytics. We are committed to creating trusted experiences that connect people and businesses through valuable advertising. We assist businesses of all scales, from small enterprises to major brands and YouTube creators, with effective advertiser tools that yield measurable results. Furthermore, we enable Google to engage with customers at scale.
Responsibilities
- Develop and deliver best-practice recommendations, tutorials, blog articles, sample code, and technical presentations, adapting the approach and messaging to suit various business and technical stakeholders.
- Design, develop, and maintain scalable and reliable data pipelines for collecting, processing, and storing data from diverse sources.
- Implement robust data quality checks and monitoring systems to ensure data accuracy and integrity.
- Collaborate effectively with cross-functional teams, including data science, engineering, product managers, sales, and finance, to understand data requirements and deliver high-impact data solutions.
- Optimize data infrastructure for enhanced performance, efficiency, and scalability to meet evolving business needs.
Minimum qualifications:
- Bachelor's degree in Computer Science, Mathematics, a related field, or equivalent practical experience.
- 1 year of experience with data processing software (e.g., Hadoop, Spark, Pig, Hive) and algorithms (e.g., MapReduce, Flume).
- Experience with database administration techniques or data engineering, alongside proficiency in Java, C++, Python, Go, or JavaScript.
- Experience managing client-facing projects, troubleshooting technical issues, and collaborating with Engineering and Sales Services teams.
Preferred qualifications:
- Experience with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools and environments.
- Experience building multi-tier, high-availability applications using modern web technologies (e.g., NoSQL, MongoDB, SparkML, TensorFlow).
- Experience in big data, information retrieval, data mining, or machine learning.
- Experience architecting, developing software, or building internet-scale production-grade big data solutions in virtualized environments.