
Nacre Capital

Data Engineer

Posted 20 Days Ago
5 Locations
Mid level

Description

About Us:

Aquaticode builds artificial intelligence solutions for aquaculture. Our core competency lies at the intersection of biology and artificial intelligence, utilizing specialized imaging technology to detect, identify, and predict traits of aquatic species. We value commitment and creativity in building real-world solutions that benefit humanity.

Requirements

We are seeking a talented Data Engineer with experience supporting machine learning (ML) research to join our team. The ideal candidate will have a strong background in building robust data pipelines and workflows that facilitate ML projects, along with an eagerness to learn new technologies. This role requires proficiency in data-processing technologies and an understanding of the data needs specific to ML research.

Key Responsibilities:

· Develop, maintain, and optimize data pipelines and workflows to support ML research and model development.

· Design and implement scalable data architectures for handling large datasets used in ML models.

· Collaborate closely with ML researchers and data scientists to understand data requirements and ensure data availability and quality.

· Work with databases and data integration processes to prepare and transform data for ML experiments.

· Utilize MongoDB and other NoSQL databases to manage unstructured and semi-structured data.

· Write efficient, reliable, and maintainable code in Python and SQL for data processing tasks.

· Implement data validation and monitoring systems to ensure data integrity and performance.

· Support the deployment of ML models by integrating data solutions into production environments.

· Ensure the scalability and performance of data systems through rigorous testing and optimization.
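For candidates unfamiliar with the role, the pipeline work described above can be pictured with a minimal sketch. This is an illustrative extract-transform-validate flow only; the function names, record fields, and values are invented for this example and are not part of any Aquaticode system (in practice such tasks would typically be orchestrated by a tool like Airflow):

```python
# Hypothetical sketch of an extract-transform-validate pipeline step.
# All names and fields here are invented for illustration.

def extract(raw_records):
    """Simulate pulling raw imaging metadata from an upstream source,
    dropping missing entries."""
    return [r for r in raw_records if r is not None]

def transform(records):
    """Normalize raw records into a clean shape for an ML experiment."""
    return [
        {"species": r["species"].strip().lower(),
         "length_mm": float(r["length_mm"])}
        for r in records
    ]

def validate(records):
    """Basic data-integrity check before handing data to researchers."""
    assert all(r["length_mm"] > 0 for r in records), "non-positive length"
    return records

def run_pipeline(raw_records):
    # Each stage is a small, testable unit, mirroring how pipeline
    # tasks are typically composed and monitored.
    return validate(transform(extract(raw_records)))

clean = run_pipeline([{"species": " Salmon ", "length_mm": "42.5"}, None])
```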

Required Skills & Qualifications:

· Proficiency in English (spoken and written).

· Strong experience in Python and SQL.

· Hands-on experience building data pipelines with Apache Airflow.

· Experience working with databases, including MongoDB (NoSQL) and relational databases.

· Understanding of data modeling, ETL processes, and data warehousing concepts.

· Experience with cloud platforms like AWS, GCP, or Azure.
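As a rough illustration of the SQL and ETL work this role involves, the sketch below aggregates raw rows into one feature row per entity. The schema, table name, and values are hypothetical, chosen purely for the example (SQLite stands in for a production relational database):

```python
import sqlite3

# Hypothetical schema and data, invented for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (fish_id INTEGER, weight_g REAL)")
conn.executemany(
    "INSERT INTO measurements VALUES (?, ?)",
    [(1, 120.0), (2, 95.5), (2, 96.1)],
)

# A typical transform step: collapse repeated raw measurements into
# one averaged feature per fish before loading into a warehouse table.
rows = conn.execute(
    "SELECT fish_id, AVG(weight_g) FROM measurements "
    "GROUP BY fish_id ORDER BY fish_id"
).fetchall()
```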

Good to Have:

· Experience with other NoSQL databases like InfluxDB, Elasticsearch, or similar technologies.

· Experience with backend frameworks like FastAPI, Flask, or Django.

· Knowledge of containerization tools like Docker.

· Familiarity with messaging queues like RabbitMQ.

· Understanding of DevOps practices and experience with CI/CD pipelines.

· Experience with front-end development (e.g., React, Next.js).

About Nacre Capital:

We were founded by Nacre Capital, a venture builder focused on AI within the life sciences. Nacre has an impressive track record in creating, building, and growing deep tech startups, including Face.com (acquired by Facebook), Fairtility, FDNA, and Seed-X.

Top Skills

Python
SQL


