Staff Data Engineer
PayJoy
San Francisco, CA, USA
USD 270,932-304,288 / year
Posted on Dec 12, 2025
About PayJoy
PayJoy is a mission-first credit provider dedicated to helping under-served customers in emerging markets achieve financial stability and success. Our patented technology for secured credit provides an on-ramp for new customers to enter the credit system. Through PayJoy’s point-of-sale financing and credit cards, customers gain access to a modern quality of life. PayJoy’s credit also allows our customers to seize opportunities as micro-entrepreneurs and provides a safety net in tough times. Through our cutting-edge machine learning, data science, and anti-fraud AI, we have served over 18 million customers as of 2025 while achieving solid profitability for sustainable growth.
This role
The Staff Data Engineer is responsible for ensuring that the organization has reliable, accessible, and well-organized data by designing and maintaining systems that process information in real time and in batches. This includes defining a data strategy aligned with business goals, ensuring data quality and security, optimizing database performance, automating processes to increase efficiency, overseeing monitoring and rapid issue resolution, and providing leadership and guidance to other teams to ensure best practices in data usage and management.
Responsibilities
- Architect and Build Data Pipelines: Design, build, optimize, and maintain reliable, scalable, and efficient data pipelines for both batch and real-time data processing.
- Data Strategy: Develop and maintain a data strategy aligned with business objectives, ensuring data infrastructure supports current and future needs.
- Streaming Expertise: Lead the development of real-time ingestion pipelines using Kafka/Kinesis, and design data models optimized for streaming workloads.
- Data Quality & Governance: Implement data quality checks, schema evolution, lineage tracking, and compliance using tools like Unity Catalog and Delta Lake etc.
- Tool & Technology Selection: Evaluate and implement the latest data engineering tools and technologies that will best serve our needs, balancing innovation with practicality.
- Automation and CI/CD: Drive automation of pipeline deployments, testing, and monitoring using Terraform, CircleCI, or similar tools.
- Performance Tuning: Regularly review, refine, and optimize SQL queries across different systems to maintain peak performance. Identify and address bottlenecks, query performance issues, and resource utilization. Set up best practices and educate developers on what they should be doing in the software development lifecycle to ensure optimal performance.
- Database Administration: Manage and maintain production AWS RDS MySQL, Aurora, and PostgreSQL databases. Perform routine database operations, including backups, restores, and disaster recovery planning. Monitor database health, and diagnose and resolve issues in a timely manner.
- Knowledge and Training: Serve as the primary point of contact for knowledge related to database performance and usage, providing guidance, training, and expertise to other teams and stakeholders.
- Monitoring & Troubleshooting: Implement monitoring solutions to ensure high availability and troubleshoot data pipeline issues in real-time.
- Documentation: Maintain comprehensive documentation of systems, pipelines, and processes for easy onboarding and collaboration.
- Mentorship & Leadership: Mentor other engineers, review PRs, and establish best practices in data engineering.
Requirements
- Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
- 12+ years of experience in data engineering, with at least 3 years working in Databricks.
- Deep hands-on experience with Apache Spark (PySpark/SQL), Delta Lake, and Structured Streaming.
- Technical Expertise: Deep understanding of data engineering concepts, including ETL/ELT processes, data warehousing, big data technologies, and cloud platforms (e.g., AWS, Azure, GCP).
- Strong proficiency in Python, SQL, and data modeling for both OLTP and OLAP systems.
- Architectural Knowledge: Strong experience in designing and implementing data architectures, including real-time data processing, data lakes, and data warehouses.
- Tool Proficiency: Hands-on experience with data engineering tools such as Apache Spark, Kafka, Databricks, Airflow, and modern data orchestration frameworks.
- Innovation Mindset: A track record of implementing innovative solutions and reimagining data engineering practices.
- Experience with Databricks Workflows, Delta Live Tables (DLT), and Unity Catalog.
- Familiarity with stream processing patterns (exactly-once semantics, watermarking, checkpointing).
Benefits
- 100% Company-funded health, dental, and vision insurance for the employee and immediate family
- Company-funded employee life and disability insurance
- 3% employer 401k contribution
- Company holidays; 20 vacation days; flexible sick leave
- Headphones, home office equipment, and wellness perks
- $2,000 USD annual Co-working Travel perk
- $2,000 USD annual Professional Development perk
- Commuter benefit
PayJoy is proud to be an Equal Employment Opportunity employer and we welcome and encourage people of all backgrounds. We do not discriminate based upon race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
Finance for the next billion * Ownership * Break Through Walls * Live Communication * Transparency & Directness * Focus on Scale * Work-Life Balance * Embrace Diversity * Speed * Active Listening