Please note that all applicants must be living in and eligible to work in the USA
WHAT YOU’LL DO:
- Support our externally facing data APIs and the applications built on top of them
- Build systems and services that abstract away the underlying engines, allowing users to focus on business and application logic through higher-level programming models
- Improve and maintain data pipelines and tools to keep pace with the growth of our data and its consumers
- Identify and analyze requirements and use cases from multiple internal teams (including finance, compliance, analytics, data science, and engineering); work with other technical leads to design solutions that meet them
WHAT WE’RE LOOKING FOR:
- Deep experience with distributed systems, distributed data stores, data pipelines, and other tools in cloud services environments (e.g., AWS, GCP)
- Experience with distributed processing engines such as Hadoop, Spark, and/or the GCP data ecosystem
- Experience with stream processing frameworks such as Kafka, Storm, Flink, or Spark Streaming
- Strong experience with Python, Java, Go, or a similar language
- Experience building backend services and data pipelines
- Familiarity with Unix-like operating systems
- Experience with database internals, database language theory, database design, SQL, and database programming
- Familiarity with distributed ledger technology concepts and financial transaction/trading data
- You have a passion for working with great peers and for motivating teams to reach their potential
- You are a strong partner to other engineering and non-engineering teams and can drive cross-functional projects forward
- You have experience building internal infrastructure that is shared across teams