Why InterviewStack
Jobs from hundreds of ATS platforms and career pages, with every listing AI-enriched with verified roles, normalized skills, salary data, and benefits.
Every listing is processed by AI to extract role types, skills, experience levels, salary ranges, and benefits, even when the original posting is incomplete.
Aggregated from Greenhouse, Lever, Ashby, SmartRecruiters, Workday, BambooHR, Personio, and more, across Engineering, Product, Data, Design, and other tech roles.
Found a role you like? Use our preparation guides, AI mock interviews, and question banks to prepare specifically for the company and role before you apply.
Aspora
People on the move deserve a bank that moves with them. Since 2022, Aspora has been building a borderless financial operating system that makes money as mobile and transparent as its users.
We're backed by leading venture firms including Sequoia Capital, Greylock Partners, Hummingbird Ventures, Y Combinator, and Global Founders Capital. We're a team of 75+ across India, the UK, the UAE, the EU, and the US, working with extreme ownership, radical candour, and an obsession with customer impact.
We celebrate builders who question assumptions, ship fast, and turn regulatory complexity into elegant solutions. If you're driven to redefine what global banking can be, we'd love to build the future with you.
You'll join our Data Platform team and get hands-on experience building and improving the infrastructure that powers analytics and machine learning across the company. You'll work alongside senior engineers on real problems, not toy projects.
Pipeline development
Contribute to ETL/ELT pipelines using Python and SQL. Learn to write idempotent, testable pipeline code with guidance from your team.
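To make "idempotent" concrete: an idempotent load step can be re-run safely because it upserts on a key instead of blindly appending. A minimal sketch using stdlib SQLite (table and column names are illustrative, not Aspora's schema):

```python
import sqlite3

# Hypothetical target table, keyed on the day being loaded.
DDL = """
CREATE TABLE IF NOT EXISTS daily_totals (
    day TEXT PRIMARY KEY,
    amount REAL NOT NULL
)
"""

def load_daily_totals(conn: sqlite3.Connection, rows: list[tuple[str, float]]) -> None:
    """Upsert keyed on `day`: retrying the same batch leaves one row per day,
    so a failed-then-retried pipeline run does not duplicate data."""
    conn.execute(DDL)
    conn.executemany(
        "INSERT INTO daily_totals (day, amount) VALUES (?, ?) "
        "ON CONFLICT(day) DO UPDATE SET amount = excluded.amount",
        rows,
    )
    conn.commit()
```

Running `load_daily_totals` twice with the same batch yields the same table state as running it once, which is exactly the property that makes retries and backfills safe.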
Spark & Databricks exploration
Run and optimise PySpark jobs, explore execution plans, and understand how data moves through our lakehouse (Delta Lake).
Orchestration & scheduling
Write and debug Airflow DAGs. Learn dependency management, alerting patterns, and how SLAs are enforced in production.
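The core idea behind an Airflow DAG, tasks run in an order derived from their declared dependencies, can be sketched without Airflow itself using a toy topological sort (task names are made up for illustration):

```python
from graphlib import TopologicalSorter

# Toy task graph: each task maps to the set of tasks it depends on.
# This mirrors how a DAG declares upstream dependencies; it is not a real
# Airflow API, just the underlying scheduling idea.
deps = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
}

def run_order(graph: dict[str, set[str]]) -> list[str]:
    """Return an execution order that respects every declared dependency."""
    return list(TopologicalSorter(graph).static_order())
```

For a linear chain like this, the only valid order is extract, then transform, then quality_check, then load; Airflow's scheduler resolves the same ordering (plus retries, SLAs, and alerting) from the dependencies you declare between tasks.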
Data quality & observability
Help build automated data quality checks and learn how the team monitors pipeline health and responds to incidents.
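One common shape for an automated quality check is a threshold guard that fails the pipeline before bad data is published. A minimal sketch (function names and the 1% threshold are illustrative assumptions, not the team's actual tooling):

```python
def null_rate(rows: list[dict], column: str) -> float:
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for row in rows if row.get(column) is None)
    return missing / len(rows)

def check_null_rate(rows: list[dict], column: str, max_rate: float = 0.01) -> None:
    """Raise if nulls exceed the threshold, the kind of guard a pipeline
    runs before publishing a table downstream."""
    rate = null_rate(rows, column)
    if rate > max_rate:
        raise ValueError(f"{column}: null rate {rate:.1%} exceeds {max_rate:.1%}")
```

In practice checks like this run as a pipeline step, and a failure pages the team, which is where the incident-response side of observability comes in.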
Developer tooling
Contribute to internal libraries and pipeline templates that make the broader data engineering team more productive.
Currently pursuing a degree in Computer Science, Data Engineering, Software Engineering, or a related field
Comfortable writing Python and SQL: you've used them in coursework, projects, or internships
Familiar with core data concepts: relational databases, querying, and how data flows between systems
Basic understanding of version control (Git) and how software is developed collaboratively
Curious, self-directed, and comfortable asking questions when stuck
Any exposure to distributed computing or big data tools (Spark, Hadoop, Kafka), even from a class or online course
Experience with cloud platforms (AWS, GCP, or Azure); even personal or project-level use counts
Projects (personal, academic, or open source) that show you enjoy working with data at scale
This job listing is hosted on InterviewStack.io