Egen

Senior Data Engineer – Python & AWS

Hybrid
Hyderabad, Telangana

Job Overview

We are looking for an experienced Senior Data Engineer to design, build, and optimize scalable, high-performance data platforms using AWS cloud services and Python. The ideal candidate will play a key role in architecting end-to-end data pipelines, driving automation, ensuring data quality, and enabling analytics and AI workloads across the organization.
This role requires deep technical expertise in AWS data services and modern data architecture, along with a passion for delivering reliable, high-quality data solutions at scale.

Key Responsibilities

  • Architect and implement scalable, fault-tolerant data pipelines using AWS Glue, Lambda, EMR, Step Functions, and Redshift
  • Build and optimize data lakes and data warehouses on Amazon S3, Redshift, and Athena
  • Develop Python-based ETL/ELT frameworks and reusable data transformation modules (see the sketch after this list)
  • Integrate multiple data sources (RDBMS, APIs, Kafka/Kinesis, SaaS systems) into unified data models
  • Lead efforts in data modeling, schema design, and partitioning strategies for performance and cost optimization
  • Drive data quality, observability, and lineage using AWS Data Catalog, Glue Data Quality, or third-party tools
  • Define and enforce data governance, security, and compliance best practices (IAM policies, encryption, access control)
  • Collaborate with cross-functional teams (Data Science, Analytics, Product, DevOps) to support analytical and ML workloads
  • Implement CI/CD pipelines for data workflows using AWS CodePipeline, GitHub Actions, or Cloud Build
  • Provide technical leadership, code reviews, and mentoring to junior engineers
  • Monitor data infrastructure performance, troubleshoot issues, and lead capacity planning
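
The ETL/ELT item above lends itself to a concrete illustration. Below is a minimal sketch of one reusable extract-transform-load step in Python with boto3 and pandas, writing date-partitioned Parquet back to S3. The bucket names, object keys, and columns (order_id, order_ts) are invented for illustration, and the to_parquet call with an s3:// path assumes s3fs is installed; treat it as a sketch of the pattern, not actual pipeline code.

    import boto3
    import pandas as pd

    def extract_orders(bucket: str, key: str) -> pd.DataFrame:
        # Read a raw CSV extract from S3 into a DataFrame.
        obj = boto3.client("s3").get_object(Bucket=bucket, Key=key)
        return pd.read_csv(obj["Body"])

    def transform(df: pd.DataFrame) -> pd.DataFrame:
        # Simple data-quality gate, then derive the partition column.
        df = df.dropna(subset=["order_id", "order_ts"])
        df["order_ts"] = pd.to_datetime(df["order_ts"])
        df["order_date"] = df["order_ts"].dt.date
        return df

    def load(df: pd.DataFrame, bucket: str) -> None:
        # Hive-style date partitions let Athena and Glue prune scans.
        for day, part in df.groupby("order_date"):
            part.to_parquet(f"s3://{bucket}/orders/order_date={day}/part-0.parquet")

    if __name__ == "__main__":
        frame = transform(extract_orders("raw-zone", "orders/2026-02-17.csv"))
        load(frame, "curated-zone")

In practice each stage would live in a shared library so Glue jobs, Lambda handlers, and tests can reuse the same transformation modules.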

Required Skills & Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field
  • 5–10 years of hands-on experience in data engineering or data platform development
  • Expert-level proficiency in Python (pandas, PySpark, boto3, SQLAlchemy); see the PySpark sketch after this list
  • Advanced experience with AWS data services, including:
      • AWS Glue, Lambda, EMR, Step Functions, DynamoDB, Redshift (EDW), Athena, S3, Kinesis, and Amazon QuickSight
      • IAM, CloudWatch, and CloudFormation/Terraform for infrastructure automation
  • Strong experience in SQL, data modeling, and performance tuning
  • Proven ability to design and deploy data lakes, data warehouses, and streaming solutions
  • Solid understanding of ETL best practices, partitioning, error handling, and data validation
  • Hands-on experience in version control (Git) and CI/CD for data pipelines
  • Knowledge of containerization (Docker/Kubernetes) and DevOps concepts
  • Excellent analytical, debugging, and communication skills
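
As a companion to the Python proficiency item above, here is a short PySpark sketch of a daily rollup job of the kind this role implies, runnable on EMR or a Glue Spark environment. The lake paths and the event_ts/event_type columns are assumptions made for the example, not details from this posting.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events-daily-rollup").getOrCreate()

    # Hypothetical curated-lake path; any Parquet dataset with an event_ts
    # timestamp and an event_type string column would work the same way.
    events = spark.read.parquet("s3://curated-zone/events/")

    daily = (
        events
        .where(F.col("event_ts").isNotNull())              # basic validation
        .withColumn("event_date", F.to_date("event_ts"))
        .groupBy("event_date", "event_type")
        .agg(F.count("*").alias("event_count"))
    )

    # Partitioning the output by date keeps downstream Athena and Redshift
    # Spectrum queries cheap, since they can prune untouched partitions.
    daily.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://analytics-zone/events_daily/"
    )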

Preferred Skills

  • Experience with Apache Spark or PySpark on AWS EMR or Glue
  • Familiarity with Airflow, dbt, or Dagster for workflow orchestration (a minimal Airflow sketch follows this list)
  • Exposure to real-time data streaming (Kafka, Kinesis Data Streams, or Firehose)
  • Knowledge of Lake Formation, Glue Studio, or DataBrew
  • Experience integrating with machine learning and analytics platforms (SageMaker, QuickSight)
  • Certification: AWS Certified Data Analytics – Specialty or AWS Certified Solutions Architect
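
For the orchestration item above, a minimal Airflow DAG sketch, assuming Airflow 2.4+ (for the schedule argument) and a placeholder callable in place of real pipeline logic:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def run_etl():
        # Placeholder for a real extract-transform-load entry point.
        print("extract -> transform -> load")

    with DAG(
        dag_id="daily_orders_etl",
        start_date=datetime(2026, 1, 1),
        schedule="@daily",
        catchup=False,
    ):
        PythonOperator(task_id="run_etl", python_callable=run_etl)

In a real deployment each pipeline stage would be its own task, so retries and backfills stay granular.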

Soft Skills

  • Strong ownership mindset with a focus on reliability and automation
  • Ability to mentor and guide data engineering teams
  • Effective communication with both technical and non-technical stakeholders
