
NationsBenefits

Data Engineer

Sorry, this job was removed at 10:13 a.m. (IST) on Thursday, Jan 29, 2026
In-Office
Hyderabad, Telangana

Similar Jobs

7 Days Ago
Remote or Hybrid
India
Senior level
Fintech • Professional Services • Consulting • Energy • Financial Services • Cybersecurity • Generative AI
Design, build, and operate scalable ETL/ELT pipelines using PySpark and AWS data services. Orchestrate workflows with Apache Airflow, implement AWS Glue jobs and Data Catalog, manage Lake Formation permissions, publish datasets for BI, and deliver QuickSight visualizations while ensuring data quality and performance.
Top Skills: PySpark, Apache Airflow, AWS Glue, AWS Lake Formation, AWS Glue Data Catalog, Amazon QuickSight
15 Days Ago
Remote or Hybrid
India
Mid level
Fintech • Information Technology • Insurance • Financial Services • Big Data Analytics
No role-specific responsibilities provided in the posting. Typically, a Unit Manager - Data Governance leads the data governance function: defining and enforcing data policies, managing data quality and metadata, coordinating stakeholders, ensuring regulatory compliance, and supervising a team to implement governance processes.
16 Days Ago
In-Office
5 Locations
Entry level
Artificial Intelligence • Big Data • Healthtech • Information Technology • Machine Learning • Software • Analytics
Design, develop, and maintain high-performance ETL solutions using SSIS and T-SQL; integrate diverse data sources; use C# and Regex for transformations; monitor QA/UAT/Production ETL jobs; implement data quality and DevOps pipelines; collaborate with stakeholders and improve ETL best practices.
Top Skills: SSIS, T-SQL, SQL, SSMS, Snowflake, Databricks, Kafka, C#, Regex, TFS, GitHub, APIs, FTP, CSV, JSON, XML, Parquet, DevOps
About NationsBenefits:

At NationsBenefits, we are leading the transformation of the insurance industry by developing innovative benefits management solutions. We focus on modernizing complex back-office systems to create scalable, secure, and high-performing platforms that streamline operations for our clients. As part of our strategic growth, we are focused on platform modernization — transitioning legacy systems to modern, cloud-native architectures that support the scalability, reliability, and high performance of core back-office functions in the insurance domain.

As a Data Engineer, you will be responsible for requirement gathering, data analysis, and the development and implementation of orchestrated data pipeline solutions that support our organization's data-driven initiatives, ensure data accuracy, and enable data-driven decision-making across the organization. The ideal candidate will have a minimum of 3-5 years of hands-on data engineering experience on high-performing teams. Expertise in DBT, Airflow, Azure Databricks, SQL, Python, PySpark, and automation is a must; knowledge of reporting tools is an added advantage.
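For illustration only, an orchestrated pipeline of the kind described here might pair Airflow with dbt as in the following minimal sketch. It assumes Airflow 2.4+ and the dbt CLI; the DAG name, schedule, and project path are hypothetical, not details from this posting:

```python
# Hypothetical sketch: a daily Airflow DAG that runs and then tests a dbt
# project. The dag_id, schedule, and --project-dir path are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_pipeline",       # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                 # 'schedule' requires Airflow 2.4+
    catchup=False,
) as dag:
    # Build the dbt models in the warehouse.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/project",
    )

    # Run dbt's built-in tests only after the build succeeds.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/project",
    )

    dbt_run >> dbt_test
```

Separating the run and test tasks keeps a failed data-quality check visible as its own task in the Airflow UI, which simplifies debugging DAG failures.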

Key Responsibilities:

  • 3 to 5 years of hands-on experience using DBT, Airflow, Azure Databricks, Python, PySpark, and SQL, preferably in the healthcare or fintech domain, with an automation-first mindset.
  • Hands-on experience with data collection, data analysis, data modeling, and data processing using DBT, Airflow, Azure Databricks, PySpark, SQL, and Python.
  • Performance Optimization and Automation: Continuously monitor and optimize existing solutions; debug and resolve DAG failures.
  • Data Processing: Build robust data pipelines with the tech stack above, delivered through CI/CD.
  • Collaboration: Collaborate with cross-functional teams, including data scientists, business analysts, and stakeholders, to understand their data needs and deliver solutions.
  • Data Quality: Implement data validation and cleansing processes to ensure data accuracy, consistency, and reliability (a minimal sketch follows this list).
  • Influence: Propose the right solution for each use case and build consensus across the team to adopt it.
  • Ad Hoc Analysis and Reporting: Perform exploratory data analysis, develop data visualizations and dashboards, and generate actionable insights to support business decision-making.
  • Stay Current: Stay up to date with emerging trends and technologies in data engineering and analytics and make recommendations for their adoption.
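
As a minimal sketch of the data-quality responsibility above: the table paths, the member_id key column, and the 5% rejection threshold are all assumptions for illustration, not details from this posting. A PySpark validation and cleansing step might look like:

```python
# Hypothetical PySpark validation/cleansing step; paths, the member_id
# column, and the 5% threshold are assumptions, not from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_quality_check").getOrCreate()

raw = spark.read.parquet("/data/claims_raw")  # placeholder input path

# Reject rows missing the business key, then deduplicate on it.
valid = (
    raw.filter(F.col("member_id").isNotNull())
       .dropDuplicates(["member_id"])
)

# Simple quality gate: fail the run if too many rows were dropped,
# so the orchestrator surfaces the failure instead of publishing bad data.
total = raw.count()
dropped = total - valid.count()
if dropped > 0.05 * total:
    raise ValueError(f"Validation dropped {dropped} of {total} rows")

valid.write.mode("overwrite").parquet("/data/claims_clean")  # placeholder output
```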

Requirements:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • Minimum 3+ years of hands-on experience using DBT, Airflow, Azure Databricks, PySpark, SQL, Python, and automation tooling.
  • Flexible to build data reports and dashboards using SQL, Python, and reporting tools.
  • Strong debugging and automation skills.
  • Strong understanding of DWH/data lake concepts and methodologies.
  • Experience with cloud platforms such as Azure, AWS, or GCP.
  • Excellent communication, presentation, and interpersonal skills.
  • Knowledge of data quality, data validation, data security, and compliance standards is a plus.
  • Excellent problem-solving skills and attention to detail.

What you need to know about the Hyderabad Tech Scene

Because of its proximity to leading research institutions and a government committed to the city's growth, Hyderabad's tech scene is booming. With plans to establish India's first "AI city," the city is on track to become one of the world's most anticipated tech hubs, with companies like TransUnion, Schrödinger and Freshworks, among others, already calling the city home.
