Principal Engineer, Data Engineering

Ripple is the world’s only enterprise blockchain solution for global payments. Today the world sends more than $155 trillion across borders, yet the underlying infrastructure is dated and flawed. Ripple connects banks, payment providers, corporates, and digital asset exchanges via RippleNet to provide one frictionless experience for sending money globally.

In this role, you will architect and implement the data infrastructure for analytics and data-centric product features at Ripple. This includes creating our complete data platform: unified data ingestion, distributed systems for processing, self-serve data lakes, and batch/stream ETL pipelines that produce golden datasets for analytics. Successful candidates will be able to demonstrate a history of thoughtfulness and curiosity in data ingestion, generation, and pipelining, as well as in the governance and security of data at Ripple.

This is a senior, high-visibility role that requires a clear architectural vision, the desire to code rapidly, a very fast shipping pace, exacting communication and leadership skills, and the ability to educate our firm on the technology we are developing and shipping. You will also represent Ripple’s Data Platform and Engineering organization as an expert member, responding to conversations, seeding ideas, and participating in architectural discussions.

About The Role

  • This position is responsible for architecting, designing, implementing, and managing data platforms for Ripple on hyperscale cloud platforms (AWS, GCP).
  • You will partner with other engineers and product managers to translate data needs into critical information that can be used to implement scalable data platforms and self-service tools.
  • Collaborate with business teams by providing technical input to Data Governance policies, standards, and processes, establishing clear data classification along with ownership, access, and security (privacy and protection) of sensitive data.
  • Work with service teams and other engineering and business partners on Data Infrastructure and Engineering roadmap planning to build the database infrastructure for future scale and volume.
  • Keep observability a focus of all database monitoring, and implement and improve auto-remediation techniques.
  • Partner with service and performance teams for continuous architecture improvements, resiliency, and performance.
  • Own the delivery, quality, and reliability of our Financial Data Hub.
  • Develop data migration architecture for scale and strategy for data migration across clouds.
  • The ideal candidate will have strong hands-on experience in designing, developing, and managing enterprise-level database systems with complex interdependencies, with a key focus on high availability, clustering, cloud migration, security, performance, and scalability.

Key Responsibilities

  • 12+ years’ experience designing and developing enterprise data architecture and engineering solutions that have supported massive workloads and data scale/volume.
  • Experience working with private and public clouds (AWS, GCP) and capacity management principles.
  • Design, and implement a scalable data lake, including data integration and curation
  • Build a modular set of data services using Python/Scala, BigQuery/Presto SQL, API Gateway, Kafka, and Apache Spark on EMR/Dataproc, among others.
  • Deep knowledge of Data Warehouse architecture and integration.
  • Research, design, and experiment to execute fast proofs of concept for evaluating comparable products.
  • Participate in the strategic development of methods, techniques, and evaluation criteria for projects and programs, including build-vs-buy assessments at every stage, backed by proofs of concept, benchmarking, and similar evidence.
  • Experience working autonomously and taking ownership of projects.
  • Create data applications supporting search, real-time data alerts, and APIs for pulling large volumes of data.
  • Design and implement innovative data services solutions using microservices and other UI- and API-related technologies.
  • Implement processes and systems to manage data quality, ensuring production data is always accurate and available for key partners and business processes that depend on it.
  • Write unit/integration tests, contribute to the engineering wiki, and document your work.
  • Work closely with a team of frontend and backend engineers, product managers, and analysts.
  • Coach other engineers on best practices for designing and operating reliable systems at scale.
  • Design data integrations and data quality framework.
  • Execute the migration of data and processes from legacy systems to new solutions.
  • Perform production support and deployment activities.
  • Manage system performance by performing regular tests, solving problems, and integrating new features.
  • Offer support by responding to system problems in a timely manner.

What We Offer


  • The chance to work in a fast-paced start-up environment with experienced industry leaders
  • A learning environment where you can dive deep into the latest technologies and make an impact
  • Competitive salary and equity
  • 100% paid medical and dental and 95% paid vision insurance for employees starting on your first day
  • 401k (with match), commuter benefits
  • Industry-leading parental leave policies
  • Generous wellness reimbursement and weekly onsite programs
  • Flexible vacation policy – work with your manager to take time off when you need it
  • Employee giving match
  • Modern office in San Francisco’s Financial District
  • Fully-stocked kitchen with organic snacks, beverages, and coffee drinks
  • Weekly company meeting – ask-me-anything-style discussion with our Leadership Team
  • Team outings to sports games, happy hours, game nights and more!