
Junior Big Data Engineer

MetLife

To apply for this job, email your details to webipalplacement@gmail.com.

Job Description
Immediate Joiners Preferred

Location: Hyderabad

Shift Timing: 1:30 PM to 10:30 PM

Hybrid Model: 3 days WFO and 2 days WFH

Position Summary
Continuing its tradition of innovation, MetLife, as part of its Data and Analytics function, has established a dedicated center for advanced analytics and research in India. DnA Hyderabad, a.k.a. the Global Advanced Analytics and Research Center (GARC), is part of MetLife's larger Data and Analytics (DnA) organization. It focuses on scaling data governance, data management, data engineering, data science/machine learning/artificial intelligence, visualization, and techno-project-management capabilities, enabling a more cost-effective analytics operating model and increasing data and analytics maturity across the MetLife global community.

Driven by passion and purpose, we are looking for you, a high-performing data and analytics professional, to drive and support the development and deployment of actionable, high-impact data and analytics solutions in support of MetLife's enterprise functions and lines of business across markets. The portfolio of work delivers data-driven solutions across key business functions such as customer acquisition and targeting, engagement, retention and distribution, underwriting, claims service and operations, risk management, investments, and audit, and tackles hard, open-ended problems.

The portfolio of work will support the deployment of models to various clusters and environments, with the support and guidance of Big Data Engineers, by following a set of standards, and will ensure operational readiness by incorporating configuration management, exception handling, and logging for end-to-end batch and real-time model operationalization. The position requires an understanding of data engineering, Azure DevOps, the Atlassian stack, and Containers as a Service.
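For illustration only, a minimal sketch of the operational-readiness pattern described above (externalized configuration, exception handling, and logging around a batch scoring job) might look like the following PySpark skeleton. The config file path, table names, and score_batch step are hypothetical placeholders, not MetLife's actual pipeline:

# Minimal, assumption-laden sketch of an operationally hardened batch job.
# All names (config file, tables, score_batch) are hypothetical.
import json
import logging
import sys

from pyspark.sql import SparkSession
from pyspark.sql.functions import lit

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model_batch")

def score_batch(df, cfg):
    # Placeholder standing in for a real model-scoring transformation.
    return df.withColumn("score", lit(0.0))

def main(config_path):
    # Configuration is externalized rather than hard-coded.
    with open(config_path) as f:
        cfg = json.load(f)

    spark = SparkSession.builder.appName(cfg["app_name"]).enableHiveSupport().getOrCreate()
    try:
        df = spark.read.table(cfg["input_table"])
        log.info("Read %d rows from %s", df.count(), cfg["input_table"])
        scored = score_batch(df, cfg)
        scored.write.mode("overwrite").saveAsTable(cfg["output_table"])
        log.info("Wrote scores to %s", cfg["output_table"])
    except Exception:
        # Fail loudly so the scheduler (e.g. Oozie) can alert and retry.
        log.exception("Batch scoring failed")
        sys.exit(1)
    finally:
        spark.stop()

if __name__ == "__main__":
    main(sys.argv[1])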

You will work and collaborate with a nimble, autonomous, cross-functional team of makers, breakers, doers, and disruptors who love to solve real problems and meet real customer needs. You will use cutting-edge technologies and frameworks to process data, create data pipelines, and collaborate with the data science team to operationalize end-to-end machine learning and AI solutions.
Job Responsibilities

Contribute towards supporting the build and implementation of data ingestion and curation processes developed using big data tools such as Spark (Scala/Python), Hive, HDFS, Kafka, Pig, Oozie, Sqoop, Flume, ZooKeeper, Kerberos, Sentry, Impala, CDP 7.x, etc., under the guidance of Big Data Engineers (a brief illustrative sketch follows this list).
Support the ingestion of huge data volumes from various platforms for analytics needs, and prepare high-performance, reliable, and maintainable ETL code with the support and review of senior team members.
Provide relevant support in monitoring performance and proposing any necessary infrastructure changes to senior team members for their review and input.
Understand the defined data security principles and policies developed using Ranger and Kerberos.
Gain a broader understanding of how to support application developers and progressively work on efficient big data application development using cutting-edge technologies.
Collaborate with business systems analysts, technical leads, project managers, and business/operations teams in building data enablement solutions across different LOBs and use cases.
Understand and support the creation of reusable frameworks that optimize the development effort involved.
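As a purely illustrative example of the ingestion and curation work listed above, a small Spark step that lands raw files into a curated, partitioned Hive table might resemble the sketch below; the landing path, deduplication key, and table names are assumptions:

# Hypothetical sketch: ingest raw CSV files and curate them into Hive.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("claims_ingest")            # assumed job name
    .enableHiveSupport()
    .getOrCreate()
)

raw = (
    spark.read
    .option("header", "true")
    .csv("hdfs:///landing/claims/")      # assumed landing path
)

curated = (
    raw.dropDuplicates(["claim_id"])     # basic curation: deduplicate
       .withColumn("load_date", F.current_date())
)

(curated.write
    .mode("append")
    .partitionBy("load_date")
    .saveAsTable("curated.claims"))      # assumed Hive target table

spark.stop()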

Knowledge, Skills and Abilities
Education

Bachelor’s degree in Computer Science, Engineering, or related discipline
Experience

1-3 years of solutions development and delivery experience.
Hive database management and performance tuning (partitioning/bucketing); see the brief sketch after this list.
Good SQL knowledge and data analysis skills for data anomaly detection and data quality assurance.
Basic experience building stream-processing systems using solutions such as Storm or Spark Streaming.
Able to support the build and design of data warehouses and data stores for analytics consumption, on-premises or in the cloud (real-time as well as batch use cases), with guidance and help from a Sr. Big Data Engineer/Big Data Engineer.
Experience in any model management methodologies is a plus.
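To make the partitioning/bucketing point above concrete, a tuned Hive table written from Spark could be declared along these lines; the table and column names are illustrative only, and note that Spark-written buckets are not byte-compatible with native Hive bucketing:

# Illustrative only: a table partitioned by date and bucketed by customer_id.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive_tuning_demo")
    .enableHiveSupport()
    .getOrCreate()
)

events = spark.read.table("raw.events")  # assumed source table

(events.write
    .mode("overwrite")
    .partitionBy("event_date")           # partition pruning limits scans
    .bucketBy(16, "customer_id")         # co-locates rows for joins/aggregations
    .sortBy("customer_id")
    .saveAsTable("curated.events"))

spark.stop()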
Knowledge and skills (general and technical)

Required:

Proficiency and hands-on development experience with some of the key tools: HDFS, Hive, Spark, Scala, Java, Python, Databricks/Delta Lake, Flume, Kafka, etc.
Analytical skills to assess situations and arrive at optimal, efficient solutions based on requirements.
Performance tuning and problem-solving skills.
Able to support and contribute to the design of a multi-tenant, containerized Hadoop architecture for memory/CPU management and sharing across different LOBs, under the guidance of senior data engineers on the team.
Code versioning experience using Bitbucket.
Good communication skills, both written and verbal.
Proficiency in project documentation preparation and other support as required.

Additional skills (good to have):

Experience in Python and in writing Azure Functions using Python/Node.js (a minimal illustrative function follows this list).
Experience using Event Hubs for data integrations.
Support for implementing analytical data stores on the Azure platform using ADLS, Azure Data Factory, Databricks, and Cosmos DB (Mongo/Graph API).
Proficiency in using tools like Git, Bamboo, and other continuous integration and deployment tools.
Exposure to data governance principles such as metadata and lineage (Collibra/Atlas).
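As a minimal illustration of the Azure Functions point above, an HTTP-triggered Python function in the v2 programming model could look like this; the route name is an arbitrary assumption:

# Hypothetical minimal Azure Function (Python v2 programming model).
import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@app.route(route="ping")
def ping(req: func.HttpRequest) -> func.HttpResponse:
    # Trivial health-check style handler.
    return func.HttpResponse("pong", status_code=200)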
Other Requirements (licenses, certifications, specialized training – if required)

Experience with/exposure to data science projects.