Open role

Data Engineer

San Francisco, CA (On-site) · Full-time

As a Data Engineer, you'll design and manage the data pipelines and storage solutions that handle our platform's information. This role ensures that everything from transaction records to audit logs is stored efficiently and accurately, and can be retrieved or analyzed when needed. You will also help enable analytics on platform usage and performance, contributing to data-driven decisions both for our clients and internally.

Responsibilities

Design and maintain database schemas and data models for various modules (accounts, transactions, ledgers, user activity, etc.), ensuring data integrity and high performance at scale.
Build and optimize data pipelines for processing and transferring data (e.g., moving transaction data into reporting databases or exporting data for clients' needs) in a reliable and timely manner.
Develop data warehousing solutions or utilize analytics databases to aggregate large datasets and support reporting, BI dashboards, and ad-hoc analysis.
Implement ETL/ELT processes to integrate data from external sources (like market data feeds or bank systems) and to feed data into external systems as needed.
Collaborate with backend engineers to ensure that the application’s data storage and retrieval logic is efficient (especially critical given our high throughput requirements).
Implement data management best practices including backups, data retention policies, archiving of old data, and ensuring data security/privacy in all pipelines.
Troubleshoot and resolve any data-related issues such as data quality problems, pipeline failures, or database performance bottlenecks.

Requirements

4+ years of experience in a data engineering role.
Strong proficiency in SQL and experience with one or more relational database systems (e.g., PostgreSQL, Microsoft SQL Server, MySQL), including complex query writing and performance tuning.
Familiarity with big data, streaming, or NoSQL technologies (e.g., Kafka, Hadoop/Spark, Cassandra) and the ability to apply them when appropriate for scaling or streaming data.
Experience building and maintaining ETL pipelines, including knowledge of tools like Airflow, AWS Glue, or custom scripting for data workflows.
Understanding of data modeling and warehousing concepts (star schemas, OLAP vs OLTP, etc.).
Experience with data visualization or BI tools (or supporting their data needs) is a plus.
Detail-oriented, with a focus on data accuracy and reliability; an understanding of how critical data is in a financial context and a commitment to maintaining its quality.
Good communication skills to work with cross-functional teams (from engineering to product to business) on data needs and interpretations.

Submit your application

Provide a few details and our hiring team will reach out with next steps.
