Requirements:
- B.E./B.Tech., MCA, or M.E./M.Tech. graduate with 2-7 years of experience.
- Demonstrated eye for architecture, with an understanding of the trade-offs between architectural choices at both a theoretical and an applied level.
- Strong coding, debugging, and problem-solving abilities, with advanced knowledge of Python, Java, or Scala.
- Expertise in distributed big data systems, including Hadoop, Hive, Spark, and Kafka-based streaming.
- Experience with cloud-based data engineering services on AWS, GCP, or Azure (e.g., S3, Redshift, Athena, and Kinesis on AWS, or their equivalents).
- Strong understanding of SQL, columnar databases, and data warehousing concepts.
- Experience building scalable data pipelines, implementing ETL processes, and developing data lake and data warehouse solutions.
- Familiarity with data visualization tools such as Tableau, Amazon QuickSight, and Looker.
- Deep understanding of data acquisition, ingestion, processing, and management, including distributed processing and high availability.
- Commitment to quality delivery, adhering to industry best practices for performant and scalable data engineering projects.
- Experience owning the end-to-end delivery of complex data engineering projects, including data pipelines, data lakes, data warehouses, and ETL solutions.
- Highly innovative, flexible, and self-directed.
- Excellent written and verbal communication skills.
- Adaptability to a rapidly evolving business environment, with a focus on learning new technologies.
- Team player with the belief that the whole is greater than the sum of its parts.
- Passion for technology, ability to switch contexts, enthusiasm for learning, and a drive to perform.
- Demonstrated expertise in team management.
- Ability to articulate business metrics and product value.
- Excellence in the "Leads through Example" stage of leadership.
- Strong skills in mentoring junior team members.