Responsible for the development and maintenance of applications using Enterprise Java and distributed technologies.
Responsible for building scalable techniques and processes for data storage, transformation and analysis.
Participate in code and design reviews of different application modules.
Develop data pipelines in AWS or Google Cloud.
Follow best practices, guidelines, and blueprints established for the team.
Minimum 10 years of experience in Big Data technologies.
Experience in Big Data Ecosystems: Hadoop, Spark, Kafka.
Implement scalable solutions to handle ever-increasing data volumes, using big data and cloud technologies such as PySpark, Kafka, and any cloud computing platform.
Proficient understanding of distributed computing principles.
Exposure to relational databases such as Oracle and MySQL is a must.
Experience in implementing machine learning-based products.
Department: DBA / Data Warehousing, General / Other Software
Industry: IT – Software
Skills: machine learning, MySQL, big data, Kafka, Oracle, Hadoop, Spark
Education: Graduation
Recruiter details
Company Name: Tech Mahindra Ltd