
NationsBenefits

Data Engineer

Sorry, this job was removed at 10:13 a.m. (IST) on Thursday, Jan 29, 2026
In-Office
Hyderabad, Telangana

About NationsBenefits:

At NationsBenefits, we are leading the transformation of the insurance industry by developing innovative benefits management solutions. We focus on modernizing complex back-office systems to create scalable, secure, and high-performing platforms that streamline operations for our clients. As part of our strategic growth, we are pursuing platform modernization: transitioning legacy systems to modern, cloud-native architectures that support the scalability, reliability, and high performance of core back-office functions in the insurance domain.

As a Data Engineer, you will be responsible for requirement gathering, data analysis, and the development and implementation of orchestrated data pipeline solutions that support our organization's data-driven initiatives, ensure data accuracy, and enable data-driven decision-making across the organization. The ideal candidate will have 3-5 years of hands-on experience as a data engineer on high-performing teams. Expertise in DBT, Airflow, Azure Databricks, SQL, Python, PySpark, and automation is a must; knowledge of reporting tools is an added advantage.
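The orchestration idea behind tools like Airflow, where tasks run in dependency order as a DAG, can be sketched in plain Python. This is only an illustrative sketch with hypothetical task names; a real pipeline would define Airflow operators and let the scheduler execute them:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on.
pipeline = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load": {"transform"},
}

def run_pipeline(tasks):
    """Execute tasks in dependency order, as an orchestrator would."""
    order = list(TopologicalSorter(tasks).static_order())
    for task in order:
        # In Airflow, each of these would be an operator run by the scheduler.
        print(f"running {task}")
    return order

run_pipeline(pipeline)
```

In a real Airflow DAG the dependency edges above would be expressed with operator chaining (e.g. `extract >> validate >> transform >> load`), and the scheduler, rather than a loop, decides when each task runs.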

Key Responsibilities:

  • 3 to 5 years of hands-on experience with DBT, Airflow, Azure Databricks, Python, PySpark, and SQL; candidates from the healthcare and fintech domains with an automation-first mindset are preferred.
  • Hands-on experience with data collection, data analysis, data modeling, and data processing using DBT, Airflow, Azure Databricks, PySpark, SQL, and Python.
  • Performance Optimization and Automation: Continuously monitor and optimize existing solutions; debug and resolve DAG failures.
  • Data Processing: Leverage expertise in the above tech stack to build robust data pipelines with CI/CD.
  • Collaboration: Collaborate with cross-functional teams, including data scientists, business analysts, and stakeholders, to understand their data needs and deliver solutions.
  • Data Quality: Implement data validation and cleansing processes to ensure data accuracy, consistency, and reliability.
  • Influence: Propose the right solution for each use case and build team consensus around adopting it.
  • Ad hoc Data Analysis and Reporting/Dashboard Development: Perform exploratory data analysis, develop data visualizations, and generate actionable insights to support business decision-making.
  • Stay Current: Stay up to date with emerging trends and technologies in data engineering and analytics, and recommend their adoption where appropriate.
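The data-validation responsibility above can be illustrated with a minimal pure-Python sketch. The field names and rules here are hypothetical; in practice this logic would typically live in dbt tests or PySpark transformations:

```python
def validate_records(records):
    """Split records into valid and rejected rows based on simple rules.

    Hypothetical rules for illustration: 'member_id' must be present
    and 'amount' must be a non-negative number.
    """
    valid, rejected = [], []
    for row in records:
        amount = row.get("amount")
        if row.get("member_id") and isinstance(amount, (int, float)) and amount >= 0:
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected

sample = [
    {"member_id": "M1", "amount": 120.0},
    {"member_id": "", "amount": 50.0},   # missing ID -> rejected
    {"member_id": "M2", "amount": -5},   # negative amount -> rejected
]
good, bad = validate_records(sample)
```

In a production pipeline, rejected rows would normally be routed to a quarantine table for review rather than silently dropped.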

Requirements:

  • Bachelor's degree in Computer Science, Information Technology, or a related field.
  • Minimum of 3 years of hands-on experience with DBT, Airflow, Azure Databricks, PySpark, SQL, Python, and automation.
  • Flexibility to build data reports and dashboards using SQL, Python, and reporting tools.
  • Strong debugging and automation skills.
  • Strong understanding of data warehouse (DWH) and data lake concepts and methodologies.
  • Experience with cloud platforms such as Azure, AWS, or GCP.
  • Excellent communication, presentation, and interpersonal skills.
  • Knowledge of data quality, data validation, data security, and compliance standards is a plus.
  • Excellent problem-solving skills and attention to detail.

