Data Engineering Intern


Coinhako is a Singapore-based, market-leading platform for access to digital assets like Bitcoin.

Founded in 2014, Coinhako has established a reputation as one of the most secure and trusted digital asset wallet service providers and trading platforms in the APAC market. Our team is deeply passionate about building the crypto economy in the APAC region. Through the launch of our innovative suite of products and services, Coinhako aims to empower individuals and businesses to take ownership and control of how they build and manage their assets in the new world of digital finance.

In line with our expansion, we are looking for motivated individuals who are passionate about the crypto space.

You will join our team as a Data Engineering Intern on the Data Platform Team.

What you’ll be doing:

  • Build scalable, highly performant infrastructure for delivering clear business insights from a variety of raw data sources
  • Develop and maintain batch and real-time data pipelines, analytical solutions, prototypes, and proofs of concept for selected solutions
  • Develop, improve, and simplify frameworks and tools that empower our data analysts and internal teams to make data-driven decisions
  • Implement analytical projects with a focus on collecting, managing, analyzing, and visualizing data
  • Deploy and use various big data technologies and run pilots to design and improve low latency data architectures at scale
  • Work well both independently and in teams, collaborating with software engineers who can help with infrastructure setup while serving as the expert on data-related matters

What we’re looking for:

  • A self-starter who is willing to take on new challenges and comfortable working in a fast-paced, flexible environment
  • Bachelor’s degree in Computer Science or related field
  • Proficiency in Python and SQL; experience with Java or Scala is a plus

Bonus:

  • Knowledge of distributed systems and data architectures (e.g. the Lambda architecture), including how to design and implement batch and stream data processing pipelines, data modeling, optimization, and partitioning
  • Working knowledge of Big Data technologies such as Kafka, Airflow, and Apache Spark
  • Experience with complex data modeling, ETL design, and using large databases in a business environment
  • Experience with Big Data transformation technologies, especially Spark
  • Exposure to Cloud technologies on AWS or Azure
  • Experience with building data pipelines and applications to stream and process datasets

What’s in it for you:

  • Friendly and fun start-up work culture
  • Convenient office location in the heart of the CBD
  • Vibrant office with a well-stocked pantry
  • Animal-friendly environment, with a fluff ball in the office

Find out more about Coinhako here, and don’t forget to visit our Careers Page.
