Data Engineer

At Chargify

Date Posted: 5 June 2021

Location: USA only, telecommute

We are hiring a passionate Data Engineer to help Chargify build a world-class data and reporting ecosystem. We want to empower our Data Science Team with the tools they need to evolve Chargify into a truly data-driven organization, and it all starts with an amazing data platform.

Our existing data stack consists of MySQL and Elasticsearch, along with a number of third-party data sources. We have an existing Business Intelligence pipeline built on Apache Airflow and Metabase, and we would like a seasoned Data Engineer to build out this infrastructure and guide our tooling choices and data architecture into the future.
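
For illustration only, here is a minimal sketch of the kind of Airflow pipeline this role involves: a daily DAG that extracts rows from a source, transforms them, and loads them somewhere a BI tool like Metabase can query. The DAG id, field names, and stubbed steps are hypothetical assumptions, not details from this posting.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Hypothetical extract step: a real pipeline would query the MySQL
        # source (e.g., via an Airflow MySQL hook); here we return stub rows.
        return [{"subscription_id": 1, "mrr_cents": 4900}]

    def transform(ti):
        # Pull the extracted rows from XCom and normalize units.
        rows = ti.xcom_pull(task_ids="extract")
        return [{**row, "mrr_dollars": row["mrr_cents"] / 100} for row in rows]

    def load(ti):
        # A real pipeline would write to the warehouse tables that the
        # dashboards read; here we just log what would be loaded.
        rows = ti.xcom_pull(task_ids="transform")
        print(f"Loading {len(rows)} rows")

    with DAG(
        dag_id="example_mrr_rollup",  # hypothetical name
        start_date=datetime(2021, 6, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> transform_task >> load_task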


Responsibilities:

  • Work as part of our global DevOps team to build out the infrastructure and tooling to support our Data Team
  • Create and maintain testable ETL pipelines (see the testing sketch after this list)
  • Support our Data Team by providing a self-service data platform
  • Ensure our data platform is fast, secure, and compliant
  • Engage with our Software Engineering teams to understand ongoing development efforts and find ways to serve those teams with BI solutions
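
As a hedged illustration of the "testable ETL pipelines" point above: keeping transform logic in pure functions lets it be unit-tested without a live database or scheduler. The function and field names below are hypothetical, not taken from this posting.

    def cents_to_dollars(rows):
        """Convert integer cent amounts to dollar floats, preserving other fields."""
        return [{**row, "amount_dollars": row["amount_cents"] / 100} for row in rows]

    def test_cents_to_dollars():
        # Runs under pytest; no database or Airflow scheduler needed.
        rows = [{"invoice_id": 7, "amount_cents": 1250}]
        result = cents_to_dollars(rows)
        assert result[0]["amount_dollars"] == 12.50
        assert result[0]["invoice_id"] == 7  # untouched fields are preserved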

We're looking for someone who can help lead this effort: someone with the experience and drive to solve our ongoing BI and data reporting challenges.


Requirements

We require working experience with MySQL and Elasticsearch, as well as with Apache Airflow and Metabase or equivalent ETL and BI tools.

Experience

  • Engineering background with a relevant language (Python, Ruby, etc.)
  • Advanced SQL knowledge (MySQL specifically) and Elasticsearch query experience
  • Proven experience with data warehousing and ETL tools (Airflow, Metabase, etc.)
  • Working AWS knowledge (especially "data" products such as Redshift, Lambda, Kinesis, EMR, etc.)

Nice to have

  • A working knowledge of a data analytics language and its associated tools (Python/Jupyter, R, etc.)
  • Experience with big data tools: Hadoop, Kafka, etc.
  • Experience with stream-processing systems: Storm, Spark Streaming, etc.


Benefits

  • Work from anywhere in the US
  • Open PTO policy (that we make sure gets used!)
  • Monthly developer stipend for learning resources, conferences, and courses
  • Health, dental, and vision insurance plans
  • Medical and dependent care flexible spending accounts
  • 9 paid standard holidays each year in addition to open PTO
  • 401(k) savings plan
  • Company-paid Life, AD&D, and Disability coverage