Response Informatics
Foundit
Streaming Data Engineer
Delhi, Hyderabad / Secunderabad, Telangana
Full Time
Mid Level
300000-550000
Full Job Description
Streaming Data Engineer Role at Response Informatics in Delhi & Hyderabad
We are seeking a skilled Streaming Data Engineer to join our dynamic team. In this role, you will be responsible for developing and optimizing data pipelines, ensuring efficient data flow and processing. This is a permanent position offering a competitive salary and a great opportunity to work with cutting-edge technologies.
Key Responsibilities:
- Develop robust code solutions for assigned data engineering scenarios.
- Apply in-depth knowledge of Spark-related queries for data manipulation and analysis.
- Demonstrate strong Flink coding proficiency and experience troubleshooting pipeline issues.
- Utilize Core Python for tasks such as applying validation rules to CSV files, performing string comparisons, managing collections, and implementing basic language constructs.
- Optimize Spark performance and effectively use Spark Submit commands.
- Exhibit excellent SQL skills, including experience with join operations, aggregate functions, and window functions.
- Communicate technical concepts and solutions effectively.
- Understand the fundamentals of streaming data pipelines.
- Gain a solid grasp of Spark Sessions, streaming processing, and transformations of real-time data.
- Leverage experience with Spark Streaming, Kafka, and Hive, or any equivalent streaming technology.
- Build foundational knowledge in Spark and Hive concepts, including Sessions and Context.
- Showcase hands-on project experience with Spark Streaming or Flink Streaming integrated with Kafka.
- Familiarity with Azure Cloud services is a plus.
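The Core Python responsibilities above (validation rules for CSV files, string comparisons, collections) can be illustrated with a short stdlib-only sketch. The column names and validation rules here are hypothetical examples, not part of the role description:

```python
import csv
import io

# Hypothetical rules for illustration: "id" must be numeric, and
# "status" must match a known set (compared case-insensitively).
ALLOWED_STATUSES = {"active", "inactive"}

def validate_rows(csv_text):
    """Return (valid_rows, errors) for the given CSV content."""
    valid, errors = [], []
    reader = csv.DictReader(io.StringIO(csv_text))
    for lineno, row in enumerate(reader, start=2):  # header is line 1
        if not row.get("id", "").strip().isdigit():
            errors.append((lineno, "id must be numeric"))
        elif row.get("status", "").strip().lower() not in ALLOWED_STATUSES:
            errors.append((lineno, "unknown status"))
        else:
            valid.append(row)
    return valid, errors

sample = "id,status\n1,Active\n2,unknown\nx,active\n"
valid, errors = validate_rows(sample)
# row "1,Active" passes; "2,unknown" and "x,active" are rejected
```

Using `csv.DictReader` keeps the rules readable as field lookups, and collecting `(line, message)` pairs instead of raising on the first failure makes the validator usable for batch reporting.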
Evaluation Focus:
Candidates may be evaluated on their understanding of core streaming concepts, including:
- Differentiating between Spark Streaming sessions and Batch Sessions.
- Explaining Spark Structured Streaming.
- The functionality and usage of the spark.readStream() method for reading data in Spark Streaming applications.
- The purpose and various arguments of the writeStream method, such as format, location, and writeMode, for writing DataFrames to sinks.
- Identifying the action required to initiate reading data from a Kafka queue (e.g., start()).
- Demonstrating how to print the output of a streaming operation to the terminal using the console format.
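The concepts above fit together in a minimal PySpark Structured Streaming sketch. This is illustrative only: the broker address "localhost:9092" and topic name "events" are placeholders, and running it requires a Spark installation with the Kafka connector and a live Kafka broker.

```python
from pyspark.sql import SparkSession

# A SparkSession is the single entry point for both batch and
# streaming work in modern Spark.
spark = SparkSession.builder.appName("kafka-console-demo").getOrCreate()

# spark.readStream returns a DataStreamReader; declaring the source is
# lazy, so nothing is consumed from Kafka at this point.
df = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder
    .option("subscribe", "events")                        # placeholder
    .load()
)

# writeStream configures the sink: format("console") prints each
# micro-batch to the terminal. start() is the action that actually
# begins reading from the Kafka queue.
query = (
    df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
    .writeStream
    .format("console")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```

The key distinction a candidate is likely probed on: everything before start() merely builds a query plan, while start() launches the continuous query, mirroring the lazy-transformation vs. action split in batch Spark.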
Company
Response Informatics
Response Informatics is a globally recognized leader in end-to-end technology and enterprise-level management consulting. With corporate offices in New Jersey, USA, and Hyderabad, India, we have been ...
Posted on Foundit