Friday, January 1, 2021

Srijan Technologies - Big Data Engineer/Lead/Architect - Java/Python (2-14 yrs) (Srijan Technologies Pvt Ltd)

Big Data Engineer: 2-4 years of experience

Big Data Lead: 5-8 years of experience

Big Data Architect: 8+ years of experience

Responsibilities include:

- Develop scalable and reliable data solutions that move data across systems from multiple sources, in both real-time and batch modes (e.g., via Kafka)

- Build producer and consumer applications on Kafka, and define the appropriate Kafka configurations (see the producer/consumer sketch after this list)

- Design, write, and operationalize new Kafka connectors using the Kafka Connect framework

- Accelerate adoption of the Kafka ecosystem by creating a framework for leveraging technologies such as Kafka Connect, KStreams/KSQL, Schema Registry, and other streaming-oriented technologies

- Implement stream processing using Kafka Streams, KSQL, or Spark jobs alongside Kafka (see the Kafka Streams sketch after this list)

- Bring forward ideas to experiment with, and work in teams to turn those ideas into reality

- Architect data structures that meet the reporting timelines

- Work directly with engineering teams to design and build solutions for their development requirements

- Maintain high standards of software quality by establishing good practices and habits within the development team while delivering solutions on time and on budget.

- Proven communication skills, both written and oral

- Demonstrated ability to quickly learn new tools and paradigms to deploy cutting-edge solutions

- Create large-scale deployments using newly conceptualized methodologies
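
As a hedged illustration of the producer and consumer work described above, here is a minimal sketch using the standard Kafka Java client. The broker address, topic name ("events"), group id, and record contents are placeholder assumptions, not details from the posting.

```java
// Minimal Kafka producer/consumer sketch using the standard Java client.
// "localhost:9092", the "events" topic, and the group id are assumptions.
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class KafkaSketch {
    public static void main(String[] args) {
        // Producer: send a single key/value record to the "events" topic.
        Properties prodProps = new Properties();
        prodProps.put("bootstrap.servers", "localhost:9092");
        prodProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        prodProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(prodProps)) {
            producer.send(new ProducerRecord<>("events", "user-42", "{\"action\":\"login\"}"));
        }

        // Consumer: poll the same topic as part of a consumer group.
        Properties consProps = new Properties();
        consProps.put("bootstrap.servers", "localhost:9092");
        consProps.put("group.id", "events-reader");
        consProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consProps)) {
            consumer.subscribe(List.of("events"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            records.forEach(r -> System.out.printf("%s -> %s%n", r.key(), r.value()));
        }
    }
}
```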

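And a minimal Kafka Streams sketch for the stream-processing responsibility: it filters an assumed input topic into an output topic. The application id and topic names are illustrative assumptions.

```java
// Minimal Kafka Streams sketch: filter one topic into another.
// Application id and topic names are placeholder assumptions.
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class StreamsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "events-filter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("events");
        // Keep only login events and route them to a dedicated topic.
        events.filter((key, value) -> value.contains("\"action\":\"login\""))
              .to("login-events");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```
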
Skills:

- Proven hands-on experience with Kafka is a must

- Proven hands-on experience with the Hadoop stack (HDFS, MapReduce, Spark)

- Core development experience in one or more of these languages: Java, Python/PySpark, Scala, etc.

- Good experience in developing producers and consumers for Kafka, as well as custom connectors for Kafka

- 2+ years of experience developing applications using Kafka architecture, the Kafka Producer and Consumer APIs, and real-time data pipelines/streaming

- 2+ years of experience configuring and fine-tuning Kafka for optimal production performance (see the illustrative tuning sketch after this list)

- Experience using the Kafka APIs to build producer and consumer applications, along with expertise in implementing KStreams components; has developed KStreams pipelines and deployed KStreams clusters

- Strong knowledge of the Kafka Connect framework, with experience using several connector types (HTTP REST proxy, JMS, File, SFTP, JDBC, Splunk, Salesforce) and supporting wire-format translations; knowledge of connectors available from Confluent and the community (see the connector-registration sketch after this list)

- Experience developing SQL queries, and knowing the best practices for choosing KSQL vs. KStreams, will be an added advantage

- Expertise with the Hadoop ecosystem, primarily Spark, Kafka, NiFi, etc.

- Experience with integration of data from multiple data sources

- Experience with stream-processing systems such as Storm or Spark Streaming will be an advantage

- Experience with relational SQL and NoSQL databases, including one or more of Postgres, Cassandra, HBase, or MongoDB

- Experience with AWS cloud services such as S3, EC2, EMR, RDS, and Redshift will be an added advantage

- Excellent grasp of data structures and algorithms, and strong analytical skills

- Strong communication skills

- Ability to work with, and collaborate across, the team

- A good "can do" attitude
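
For the configuration and fine-tuning point above, here is a sketch of producer settings commonly adjusted for production throughput and durability. The specific values are illustrative assumptions, not recommendations from the posting.

```java
// Illustrative producer tuning knobs for production workloads;
// all values below are assumptions, not prescribed settings.
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ProducerTuning {
    public static Properties tunedProducerProps() {
        Properties p = new Properties();
        p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
              "org.apache.kafka.common.serialization.StringSerializer");
        p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
              "org.apache.kafka.common.serialization.StringSerializer");
        p.put(ProducerConfig.ACKS_CONFIG, "all");              // wait for all in-sync replicas
        p.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // avoid duplicates on retry
        p.put(ProducerConfig.LINGER_MS_CONFIG, 20);            // batch for up to 20 ms
        p.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);    // 64 KiB batches
        p.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");  // compress batches on the wire
        return p;
    }
}
```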

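And for the Kafka Connect point, a sketch of registering a JDBC source connector through a Connect worker's REST API. The worker URL, connector name, and JDBC settings are assumed for illustration; the connector class is Confluent's community JDBC source connector.

```java
// Sketch: register a JDBC source connector with a Kafka Connect worker.
// The Connect URL, connector name, and JDBC settings are assumptions.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterConnector {
    public static void main(String[] args) throws Exception {
        String config = """
            {
              "name": "orders-jdbc-source",
              "config": {
                "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
                "connection.url": "jdbc:postgresql://localhost:5432/shop",
                "mode": "incrementing",
                "incrementing.column.name": "id",
                "topic.prefix": "pg-"
              }
            }""";
        // POST the connector config to the worker's /connectors endpoint.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(config))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```
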
Apply Now
