Data Engineer - Pune (Hybrid)
Join Astar Data, a dynamic company working with clients ranging from startups to Fortune 500 enterprises. We are seeking a detail-oriented self-starter to join our engineering and analytics teams as a Data Engineer. This role is integral to a growing team building world-class, large-scale Big Data architectures. You will apply a strong understanding of programming principles and proficiency in languages such as Java or Python, spending a significant portion of your time coding.
Key Responsibilities:
- Software Development Best Practices:
  - Actively code in languages such as Python or PySpark.
  - Gain hands-on experience with the Big Data ecosystem, including Hadoop, MapReduce, Spark, HBase, and Elasticsearch.
  - Demonstrate a solid grasp of programming principles and development practices such as version control (including check-in policies), unit testing, and code deployment.
  - Proactively learn new concepts and technologies, applying them to large-scale engineering solutions.
  - Excel in application development and support, integration development, and data management.
- Client Engagement:
  - Collaborate daily with Fortune 500 clients, understanding their strategic requirements and aligning Astar Data's solutions accordingly.
- Technology Innovation:
  - Stay abreast of the latest technological advancements to maximize ROI for clients and Astar Data.
  - Develop enterprise-level code with a focus on best practices.
  - Design and implement robust APIs, abstractions, and integration patterns to address complex distributed computing challenges.
  - Apply expertise in defining technical requirements, extracting and transforming data, automating and productionizing jobs, and exploring new big data technologies within parallel processing environments.
- Culture and Mindset:
  - Exhibit strategic, unconventional thinking.
  - Bring an analytical, data-driven mindset.
  - Demonstrate raw intellect, talent, and energy.
  - Embrace an entrepreneurial, agile spirit and understand the demands of a high-growth private company.
  - Balance leadership responsibilities with hands-on execution.
Qualifications:
- Proven track record of relevant work experience and a degree in Computer Science or a related technical field.
- Mandatory experience with functional and object-oriented programming in Python or PySpark.
- Hands-on experience with the Big Data stack: Hadoop, MapReduce, Spark, HBase, and Elasticsearch.
- Solid understanding of AWS services and practical experience with APIs and microservices.
- Effective written and verbal communication skills.
- Ability to collaborate effectively with a diverse team of engineers, data scientists, and product managers.
- Comfort working in a fast-paced startup environment.
Preferred Qualifications:
- Experience with agile methodologies.
- Proficiency in database modeling and development, data mining, and warehousing.
- Experience in architecting and delivering enterprise-scale applications, including developing frameworks and design patterns. Ability to tackle technical challenges, propose comprehensive solutions, and mentor junior staff.
- Experience handling large, complex datasets from various sources.
