Job Description
Job Profile
- Translate business propositions into quantitative queries, and collect and clean the necessary data
- Develop and deploy Spark applications for customer channels using Java and Scala
- Build scalable ETL pipelines with Apache NiFi and Kafka Connect that ingest data from multiple sources, processing up to 30 TB of data daily into the Hadoop data lake and relational data warehouses.
- Develop a feature store for data scientists and machine learning models using Spark MLlib and Spark SQL in Java.
- Deploy and maintain a Kafka cluster that serves company-wide real-time use cases.
- Build tools for Kafka monitoring, alerting, and easier cluster management
- Develop custom processors using Apache NiFi and integrate them with legacy systems (e.g. SMSC) to support data pipelining from all systems.
- Provide support for the Big Data ecosystem.
Minimum Requirements
Minimum 3 years of experience with Big Data, Apache NiFi, Kafka, and Spark applications