The Role
· 40% Data Architecture — Partner with product and software engineering teams to design scalable data models, define architecture standards, and ensure clean data flows from source to consumption.
· 50% Data Engineering — Build and optimize ETL/ELT pipelines, write performant SQL and transformation logic, and maintain data quality across TB-scale datasets.
· 10% Infrastructure Reliability — Monitor platform uptime, troubleshoot incidents, and keep our cloud data infrastructure healthy.
What You’ll Do
· Partner with software engineers and product teams on data modelling, schema design, and platform architecture.
· Design and maintain scalable data pipelines across our AWS (S3, Lambda) and SingleStore environment.
· Write efficient, optimized SQL for analytics, reporting, and data product use cases.
· Define architecture standards, naming conventions, and documentation practices for the data layer.
· Help shape a coherent, well-structured data layer that can serve as the foundation for future AI and ML initiatives.
· Monitor infrastructure health, respond to incidents, and drive root-cause analysis.
What We’re Looking For
· 3–5 years in data engineering, software engineering, or a related role.
· Strong SQL skills with experience optimizing complex queries on large datasets (TB-scale).
· Hands-on experience with AWS S3 and Lambda.
· Experience with SingleStore or a comparable analytical database (e.g., ClickHouse, Snowflake, BigQuery).
· Solid understanding of data modelling (star schema, ERD, normalization trade-offs).
· Experience working with software engineers to influence data architecture decisions.
· Familiarity with ETL/ELT orchestration tools (e.g., Airflow, dbt, Apache DolphinScheduler).
· Proficiency in Python or another scripting language for pipeline development.
· Prior experience in the e-commerce or tech industry preferred.