Data Engineer - Noida (Hybrid)
Join Sigmoid in Noida as a Data Engineer in a hybrid role, contributing to world-class Big Data architectures.
About the Role
Sigmoid is seeking a detail-oriented and self-starting Data Engineer to support our engineering and analytics teams. This role is crucial for building large-scale Big Data architectures and requires a strong understanding of programming principles and extensive coding experience. You will spend a significant portion of your time coding and collaborating with a growing team.
Key Responsibilities
Development Best Practices
- Act as a hands-on coder, proficient in Python, including PySpark.
- Apply practical experience with the Big Data stack, including Hadoop, MapReduce, Spark, HBase, and Elasticsearch.
- Follow sound programming principles and development practices such as check-in policies, unit testing, and code deployment.
- Take the initiative to learn new concepts and technologies and apply them to large-scale engineering projects.
- Draw on strong experience in application development and support, integration development, and data management.
Client Alignment
- Engage daily with clients, including Fortune 500 companies, to understand and fulfill strategic requirements.
Technology Advancement
- Stay abreast of the latest technological advancements to maximize ROI for clients and Sigmoid.
- Apply hands-on coding experience and an understanding of what enterprise-grade code requires.
- Design and implement APIs, abstractions, and integration patterns to tackle complex distributed computing challenges.
- Define technical requirements; perform data extraction and transformation; automate and productionize jobs; and explore new Big Data technologies in a parallel processing environment.
Culture and Mindset
- Think strategically and devise unconventional solutions.
- Maintain an analytical and data-driven approach.
- Bring the intellect, talent, and energy critical to success.
- Embrace an entrepreneurial and agile mindset, understanding the dynamics of a high-growth private company.
- Demonstrate the ability to lead and be a hands-on contributor.
Qualifications
- Proven track record of relevant work experience and a degree in Computer Science or a related technical field.
- Mandatory experience with functional and object-oriented programming in Python, including PySpark.
- Hands-on experience with the Big Data stack such as Hadoop, MapReduce, Spark, HBase, and Elasticsearch.
- Strong understanding of AWS services and experience working with APIs and microservices.
- Effective written and verbal communication skills.
- Ability to collaborate effectively with a diverse team of engineers, data scientists, and product managers.
- Comfort working in a fast-paced startup environment.
Preferred Qualifications
- Experience with agile methodologies.
- Proficiency in database modeling and development, data mining, and warehousing.
- Experience in the architecture and delivery of enterprise-scale applications, including framework development and design patterns.
- Ability to understand and address technical challenges, propose comprehensive solutions, and mentor junior staff.
- Experience handling large, complex datasets from various sources.
