About the role
At ShopBack, our engineering teams build scalable platforms and use leading technologies to build a world-class product. You will join a diverse and talented team of aspiring engineers with great ambition to impact the eCommerce landscape. We are seeking team members who strive to solve hard problems, take pride in delivering world-class products, and are strong team players. You will get the opportunity to build scalable data systems that power the organization's data-driven decision-making and have a direct impact on growth.
Our Data Platform team builds and maintains the foundation that powers ShopBack’s analytics and decision- making.
We design and operate data pipelines across AWS S3, Apache Iceberg, and Trino, orchestrated through Airflow and modeled via dbt-on-Spark. You'll work alongside experienced data engineers who value clean data, efficient systems, and thoughtful design, not just working code.
Your Adventure Ahead
Learn to design and optimize Apache Hudi and Iceberg tables for performance and reliability
Write and validate SQL transformations consumed in Trino and Metabase
Build and maintain data models and pipelines using Spark and Apache Airflow
Use AI tools (e.g., ChatGPT, Cursor, Claude Code) to assist coding and documentation, and learn how to verify their output
Document learnings and share improvements through Confluence and Slack
Collaborate with senior engineers to improve data quality and observability
What You Will Learn
Building reliable, testable dbt models on top of large datasets
Practical debugging, observability, and version control in a real system
End-to-end flow: ingestion → transformation → analytics
How modern data lakehouses (Spark + Iceberg + Trino) work in production
How to collaborate effectively in a hybrid engineering team
Essentials to Succeed
Communicates clearly, asks questions early, and shares progress regularly
Comfort experimenting with AI coding assistants responsibly
Curiosity about data pipelines, storage formats, and data quality
Strong interest in data engineering or data systems
Familiarity with Python and SQL (school projects or self-taught experience is fine)
Bonus: exposure to AWS, dbt, Spark, or Trino
APPLY HERE: