ConsenSys – Sr. Data Engineer (EU only, remote)



JOB TYPE: Freelance, Contract Position – No agencies (See notes below)

LOCATION: Remote (Time zone: Europe only, CET/WAT | Partial overlap)

SALARY RANGE: Our client is looking to pay $50k – $150k USD/yr

ESTIMATED DURATION: 40 hrs/week – Long term


Braintrust is the only network that gives in-demand talent all the freedom of freelance with all the benefits, community, and stability of a full-time role. As the first decentralized talent network, our revolutionary Web3 model ensures that the people who rely on Braintrust to find work are the same people who own and build it through the blockchain token, BTRST. So unlike other marketplaces that take 20% to 50% of talent earnings, Braintrust allows talent to keep 100% of their earnings and to vote on key changes to improve the network. Braintrust is working to change the way freelance works – for good.

Compensation includes salary, bonus, and equity, and is based on experience and location.


ConsenSys is a venture production studio and the leading technology firm in blockchain globally. We deliver products, solutions, and platforms built on blockchain technology to transform how business is done across a complex network of buyers, suppliers, and consumers.

Our teams are busy at work building the future of identity, financial markets, commerce, the music industry, security, infrastructure and more. To accomplish this we’ve built out a flat organizational structure which we call the ConsenSys Mesh: a network of individuals & teams working autonomously towards the same goal. Our mission is to use these decentralized solutions to fundamentally reshape the economic, social, and political operating systems of the planet.

Are you passionate about decentralizing our future and taking control of how we evolve? Then join us! We are seeking passionate, determined and resilient individuals who thrive in a self-directed, collaborative culture. Our technology is transforming global society and humanity – we welcome you to this exciting opportunity.

About Infura

Infura’s development suite provides instant, scalable API access to the Ethereum and IPFS networks. Our world-class infrastructure ensures that developers can reliably scale their decentralized application to meet user demand. It’s powered by a cutting-edge microservice-driven architecture that dynamically scales to support our APIs. 

Thousands of developers actively use Infura every day, and we process billions of events per day! The strength, reliability, and stability of Infura’s infrastructure service make the brand an essential pillar of the Ethereum ecosystem and the leading blockchain infrastructure provider.

Job Responsibilities: A day in the life of a Data Engineer

Infura is building a data platform to serve the needs of data scientists, business analysts, and end users. You will collaborate with the product organization, data scientists, and business analysts to build both the underlying platform and the data pipelines that run on it. These pipelines encompass both large-scale batch processing and real-time stream processing. You will design, develop, and operate data integration pipelines that provide high-quality datasets for end-user, analytical, and machine learning use cases.
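To illustrate the batch vs. streaming distinction mentioned above (this is a minimal pure-Python sketch, not Infura’s actual stack; the event shape and names are hypothetical): a batch pipeline processes a complete dataset in one pass, while a streaming pipeline folds each event into running state as it arrives. Both converge on the same result once the stream is drained.

```python
from collections import defaultdict

# Hypothetical events: (user_id, api_calls) pairs, purely illustrative.
events = [("alice", 3), ("bob", 5), ("alice", 2), ("carol", 1)]

def batch_totals(events):
    """Batch pipeline: aggregate the full dataset in one pass."""
    totals = defaultdict(int)
    for user, calls in events:
        totals[user] += calls
    return dict(totals)

class StreamAggregator:
    """Streaming pipeline: update running state one event at a time."""
    def __init__(self):
        self.totals = defaultdict(int)

    def on_event(self, user, calls):
        self.totals[user] += calls

stream = StreamAggregator()
for user, calls in events:
    stream.on_event(user, calls)

# Once the stream is exhausted, both approaches agree.
assert batch_totals(events) == dict(stream.totals)
```

In production these two modes would typically be backed by frameworks such as Spark (batch) and a stream processor, but the aggregation logic they carry has the same shape.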

You will bring a deep understanding of software engineering best practices, such as proper use of source control, end-to-end testing, and automated deployment, to build a high-quality data platform. While we have existing systems that require support and maintenance, you will also serve multiple teams at Infura as a subject matter expert on best practices, tooling, and data engineering techniques.

Required Skills

First and foremost, we want our data engineers to be great software engineers with a passion for writing high-quality code. You appreciate agile software processes, data-driven development, reliability, and responsible experimentation. The ability to be pragmatic and to articulate the tradeoffs between a perfect solution and a working solution will make you successful in this role. The ideal candidate will have strong experience with OLTP and OLAP databases, row- and column-oriented storage formats, ELT vs. ETL processing, data lakes, and data warehousing.
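The ELT vs. ETL distinction above can be made concrete with a small sketch (assumptions: SQLite stands in for a data warehouse, and the raw records and field names are invented for illustration). ETL transforms data in application code before loading the clean result; ELT loads the raw data first and transforms it inside the database with SQL, keeping the raw copy queryable.

```python
import sqlite3

# Illustrative raw records: (user, latency_ms as text), hypothetical data.
raw = [("alice", "150"), ("bob", "90"), ("carol", "210")]

# --- ETL: Transform in application code, then Load only the clean rows.
etl_rows = [(user, int(ms)) for user, ms in raw if int(ms) < 200]

# --- ELT: Load raw data as-is, then Transform inside the "warehouse"
# (a stand-in in-memory SQLite database) using SQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE raw_latency (user TEXT, ms TEXT)")
db.executemany("INSERT INTO raw_latency VALUES (?, ?)", raw)
db.execute(
    "CREATE TABLE latency AS "
    "SELECT user, CAST(ms AS INTEGER) AS ms FROM raw_latency "
    "WHERE CAST(ms AS INTEGER) < 200"
)
elt_rows = db.execute("SELECT user, ms FROM latency ORDER BY user").fetchall()

# Either route yields the same cleaned table; ELT additionally retains
# the untouched raw_latency table for later reprocessing.
assert sorted(etl_rows) == elt_rows
```

The practical difference is where compute happens: ETL pushes work onto the pipeline hosts, while ELT leans on the warehouse engine and preserves raw data for replays.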

Using strong programming skills in a language such as Python, Scala, Java, or Go, you have built production-quality ETL pipelines at scale using Spark, Hadoop, or a similar framework. Ideally, you will have worked with modern data stores: AWS S3-based data lakes, AWS Redshift, or Snowflake. We are looking for a senior-level candidate who can understand requirements, work autonomously, and communicate effectively. If taking ownership of scoping problems and delivering pragmatic solutions sounds interesting, this is the role for you.

Bonus points for:

We use Druid, so any practical experience with it would be a bonus. Deep experience with data engineering in the AWS ecosystem, experience with orchestration tools such as Airflow, or experience with infrastructure-management tools such as Terraform is also beneficial. If you don’t check these boxes, don’t sweat it – we’d love to hear from you.

Apply now!


Qualified candidates will be invited to do a screening interview with the Braintrust staff. We will answer your questions about the project and our platform. If we determine it is the right fit for both parties, we’ll invite you to join the platform and create a profile to apply directly for this project.

C2C Candidates: This role is not available to C2C candidates working with an agency. But if you are a professional contractor who has created an LLC/corp around your consulting practice, this is well aligned with Braintrust and we’d welcome your application.  

Braintrust values the multitude of talents and perspectives that a diverse workforce brings. All qualified applicants will receive consideration for employment without regard to race, national origin, religion, age, color, sex, sexual orientation, gender identity, disability, or protected veteran status.