
Wells Fargo

Senior Software Engineer

Posted 2 Days Ago
Hybrid
Hyderabad, Telangana
Senior level
Wells Fargo is seeking a Senior Software Engineer - Data Engineering to join the CALM (Corporate Asset and Liability Management) Data Engineering team within the Enterprise Functions Technology (EFT) organization. In this role, you will be responsible for designing, developing, optimizing, and maintaining metadata-driven, scalable, high-performance data engineering frameworks that power critical financial risk processes across Corporate Treasury.
You will work independently to build resilient data pipelines, APIs, wrappers, and supporting components to enable reliable data ingestion, transformation, validation, and delivery across cloud and on-prem ecosystems. This position plays a key role in Data Center exit migrations, DPC onboarding, and enterprise-wide modernization initiatives.
The role requires deep technical expertise, hands-on problem-solving, and technical leadership in distributed data engineering, cloud platforms, data quality, and performance engineering.
In this role, you will:
  • Lead moderately complex initiatives and deliverables within technical domain environments
  • Contribute to large-scale technology planning and strategy development
  • Design, code, test, debug, and document for projects and programs associated with technology domain, including upgrades and deployments
  • Review moderately complex technical challenges that require an in-depth evaluation of technologies and procedures
  • Resolve moderately complex issues and lead a team to meet existing or potential new clients' needs, leveraging a solid understanding of the function, policies, procedures, and compliance requirements
  • Collaborate and consult with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals
  • Lead projects, act as an escalation point, and provide guidance and direction to less experienced staff
Required Qualifications:
  • 4+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education
Desired Qualifications:
  • Design, build, test, deploy, and maintain large-scale structured and unstructured data pipelines using Python, SQL, Apache Spark, and modern data lake/lakehouse technologies.
  • Develop and optimize metadata-driven pipelines, wrappers, ingestion frameworks, and validation layers to support CALM data workflows.
  • Build and maintain high-quality ELT/ETL pipelines following best practices in reliability, performance, observability, and reusability.
  • Engineer and optimize Spark pipelines for large-scale batch and streaming workloads (partitioning, caching, Catalyst optimization, AQE, Tungsten).
  • Work with open table formats such as Iceberg, Delta, or Hudi for versioned data, time-travel, compaction, and schema evolution.
  • Implement Medallion (Bronze/Silver/Gold) architecture patterns for modern lakehouse systems.
  • Implement automated data quality frameworks using tools such as Great Expectations / Deequ or custom validators.
  • Build data health monitoring frameworks with SLAs/SLOs, anomaly detection, and lineage capture.
  • Ensure strong validation layers during Data Center exits, migration programs, and DPC onboarding.
  • Build RESTful and metadata APIs using Python frameworks (FastAPI/Flask) to enable secure, governed data access.
  • Collaborate with application teams to integrate data access patterns and platform services.
  • Design and deploy data pipelines in cloud platforms (AWS, Azure, GCP) leveraging managed compute, orchestration, and storage.
  • Build CI/CD workflows and infrastructure automation using Jenkins, GitHub Actions, Azure DevOps, Terraform, or Helm.
  • Apply secure engineering principles including IAM, secrets management, encryption standards, and network controls.
  • Build resilient orchestration flows using Autosys or equivalent tools.
  • Apply modular design with retries, alerts, SLAs, and workflow dependency management.
  • Work with cross-functional Agile teams (Product, Architecture, QA, Treasury SMEs).
  • Analyze technical requirements, evaluate design alternatives, and provide recommendations aligned with enterprise standards.
  • Independently deliver complex engineering tasks and contribute to architecture/roadmap discussions.
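The qualifications above mention automated data-quality frameworks built with tools such as Great Expectations or Deequ, or with custom validators. Purely as an illustrative sketch of the custom-validator approach (the rule names, columns, and records below are hypothetical, not part of any Wells Fargo framework), a rule-driven check over a batch of records might look like:

```python
from dataclasses import dataclass
from typing import Any, Callable

# A validation rule pairs a column with a predicate and a readable name.
@dataclass
class Rule:
    name: str
    column: str
    predicate: Callable[[Any], bool]

def validate(records: list[dict], rules: list[Rule]) -> dict[str, int]:
    """Return a failure count per rule over a batch of records."""
    failures = {rule.name: 0 for rule in rules}
    for row in records:
        for rule in rules:
            if not rule.predicate(row.get(rule.column)):
                failures[rule.name] += 1
    return failures

# Illustrative rules for a hypothetical balances feed.
rules = [
    Rule("balance_not_null", "balance", lambda v: v is not None),
    Rule("currency_is_iso", "currency", lambda v: isinstance(v, str) and len(v) == 3),
]

batch = [
    {"balance": 100.0, "currency": "USD"},
    {"balance": None, "currency": "usd4"},
]
print(validate(batch, rules))  # {'balance_not_null': 1, 'currency_is_iso': 1}
```

In a production pipeline, failure counts like these would typically feed the SLA/alerting layer rather than being printed.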
Job Expectations:
  • Hands-on experience with Python, SQL, and bash scripting for automation.
  • Strong experience building big data pipelines using Apache Spark, Hive, Hadoop.
  • Experience with Autosys/Airflow or similar orchestration tools.
  • Working knowledge of REST APIs, Object Storage, Dremio, and CI/CD pipelines.
  • Strong troubleshooting and problem-solving capabilities.
  • Solid foundation in data modeling (conceptual/logical/physical) and database design.
  • Experience architecting pipelines using distributed systems patterns (shuffle optimization, spill, broadcast, storage layouts).
  • Experience with streaming frameworks like Spark Structured Streaming or Apache Flink.
  • Hands-on with optimization techniques: clustering, Z-ordering, vectorized IO (Parquet/ORC), compaction.
  • Experience implementing Medallion architectures and governed ingestion zones.
  • Knowledge of data governance platforms (Collibra, Alation, Purview).
  • Understanding of financial data controls, validation rules, reconciliation checks, and compliance (SOX/PCI).
  • Experience implementing lineage, observability, drift detection.
  • Cloud-native engineering experience: serverless, managed Spark, event-driven architectures.
  • Familiarity with containerization (Docker, K8s) and workflow operators.
  • Strong experience implementing test automation for data pipelines (unit, contract, integration tests).
  • Applying GenAI for metadata extraction, data anomaly detection, automated documentation, or pipeline optimization.
  • Exposure to financial risk, treasury functions, or Asset & Liability Management (ALM) processes.
  • Deliver high-quality engineering outcomes during Data Center exit migrations and DPC onboarding, ensuring validations, automation, and production readiness.
  • Collaborate with cross-functional teams to build scalable, high-performance data solutions using Python, SQL, Spark, Iceberg, Dremio, and Autosys.
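Among the expectations above is test automation for data pipelines (unit, contract, and integration tests). As a minimal, hedged sketch, a pipeline transform written as a pure function is straightforward to unit-test; the transform and schema here are hypothetical examples, not an actual CALM component:

```python
def to_silver(row: dict) -> dict:
    """Hypothetical bronze-to-silver transform: normalize and type raw fields."""
    return {
        "account_id": row["account_id"].strip().upper(),
        "balance": float(row["balance"]),
        "currency": row["currency"].upper(),
    }

# Unit checks as plain assertions (a real suite would use pytest or similar).
raw = {"account_id": " ac-01 ", "balance": "250.50", "currency": "usd"}
clean = to_silver(raw)
assert clean == {"account_id": "AC-01", "balance": 250.5, "currency": "USD"}
```

Keeping transforms free of I/O like this is what makes the unit layer of the test pyramid cheap to run in CI.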
Posting End Date:
23 Feb 2026
*Job posting may come down early due to volume of applicants.
We Value Equal Opportunity
Wells Fargo is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic.
Employees support our focus on building strong customer relationships balanced with a strong risk mitigating and compliance-driven culture which firmly establishes those disciplines as critical to the success of our customers and company. They are accountable for execution of all applicable risk programs (Credit, Market, Financial Crimes, Operational, Regulatory Compliance), which includes effectively following and adhering to applicable Wells Fargo policies and procedures, appropriately fulfilling risk and compliance obligations, timely and effective escalation and remediation of issues, and making sound risk decisions. There is emphasis on proactive monitoring, governance, risk identification and escalation, as well as making sound risk decisions commensurate with the business unit's risk appetite and all risk and compliance program requirements.
Candidates applying to job openings posted in Canada: Applications for employment are encouraged from all qualified candidates, including women, persons with disabilities, aboriginal peoples and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process.
Applicants with Disabilities
To request a medical accommodation during the application or interview process, visit Disability Inclusion at Wells Fargo.
Drug and Alcohol Policy
Wells Fargo maintains a drug free workplace. Please see our Drug and Alcohol Policy to learn more.
Wells Fargo Recruitment and Hiring Requirements:
a. Third-Party recordings are prohibited unless authorized by Wells Fargo.
b. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process.

Top Skills

Airflow
Alation
Apache Flink
Spark
Autosys
AWS
Azure
Azure DevOps
Bash
Collibra
Deequ
Delta
Docker
Dremio
FastAPI
Flask
GCP
GitHub Actions
Great Expectations
Hadoop
Helm
Hive
Hudi
Iceberg
Jenkins
Kubernetes
Object Storage
ORC
Parquet
Purview
Python
REST APIs
Spark Structured Streaming
SQL
Terraform

Wells Fargo Hyderabad, Telangana, IND Office

C9FH+X2J, Divyasree Orion Rd, Madhura Nagar Colony, Gachibowli, Hyderabad, Telangana, India, 500032

