RESPONSIBILITIES
As a data engineer, you will be responsible for designing, implementing, testing, and maintaining scalable data pipelines that bring together data from a wide array of sources
To achieve this, you will need to consolidate, cleanse, and structure the data, as well as maintain the automated scripts, data pipelines, and databases that comprise the data platform
This data provides researchers with insights into blockchain protocols to enable them to perform analytical work
It also drives public and internal dashboards and reports
ABOUT YOU
You like to build robust, scalable, maintainable, and well-documented data pipelines
You are passionate about blockchain and are comfortable working with large datasets
You have meticulous attention to detail and a strong appreciation for data integrity and correctness
REQUIREMENTS
- A background in Computer Science and Data Science
- Experience administering relational databases (Snowflake, PostgreSQL)
- Experience designing and maintaining data pipelines and ETL/ELT processes
- Experience developing custom data connectors
- Intermediate-level fluency in Python programming
- A good understanding of the Ethereum protocol
- Proficiency in SQL, GraphQL, and query optimization
- Good knowledge of Unix shell scripting and command-line tools
- Familiarity with cloud services (AWS EC2/S3, Google Cloud)
- Excellent verbal and written English skills
BONUS POINTS
- Proficiency with Python scientific computing (NumPy, Pandas, Jupyter)
- Experience managing and processing financial market time-series data