Job Description

We are currently looking for an experienced Data Engineer to join our Business Intelligence and Analytics team. The successful candidate will develop and maintain data marts by contributing to ETL workflows, work with Product to understand requirements, and build dashboards that support the analytical needs of the company's customers. The Data Engineer will augment the BI team's capabilities in delivering the company's BI initiative.

Key Responsibilities

  • Develop and maintain an efficient data pipeline architecture that meets business requirements.
  • Assemble large and complex data sets to meet functional and non-functional business requirements.
  • Identify opportunities for internal process improvements, such as automating manual processes, optimizing data delivery, and improving infrastructure scalability.
  • Construct the infrastructure needed to extract, transform, and load data from sources such as MSSQL, MySQL, and flat files in S3 into data marts on Redshift, using SQL, Python, and AWS data technologies (a minimal illustrative sketch follows this list).
  • Collaborate with stakeholders, product teams, data teams, and design teams to address technical data-related issues and support their data infrastructure needs.
  • Develop data tools that support analytics and data science in making the company's product an innovative industry leader.
  • Work with Product Management to validate requirements for BI and author datasets and dashboards.
  • Set up monitoring and alerting on data pipelines and dashboards.
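
As a hedged illustration of the ETL step described above (a sketch under assumptions, not the company's actual implementation), the snippet below loads a flat file from S3 into a Redshift staging table with COPY and then merges it into a data mart table. Every identifier in it (bucket, tables, IAM role, connection details) is a hypothetical placeholder.

    # Minimal illustrative sketch, assuming psycopg2 and valid Redshift
    # credentials; every identifier below is a hypothetical placeholder.
    import psycopg2

    S3_PATH = "s3://example-bucket/exports/orders.csv"              # hypothetical
    IAM_ROLE = "arn:aws:iam::123456789012:role/redshift-copy-role"  # hypothetical

    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="analytics", user="etl_user", password="...",
    )
    with conn, conn.cursor() as cur:
        # Stage the raw file; COPY is Redshift's bulk-load path from S3.
        cur.execute("TRUNCATE staging.orders;")
        cur.execute(f"""
            COPY staging.orders
            FROM '{S3_PATH}'
            IAM_ROLE '{IAM_ROLE}'
            CSV IGNOREHEADER 1;
        """)
        # Idempotent merge into the mart: delete matching keys, then insert.
        cur.execute("""
            DELETE FROM mart.orders
            USING staging.orders s
            WHERE mart.orders.order_id = s.order_id;
        """)
        cur.execute("INSERT INTO mart.orders SELECT * FROM staging.orders;")
    conn.close()

In practice a step like this would be parameterized and scheduled rather than run ad hoc, so that each day's file lands in the mart exactly once.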

Qualifications

  • 3+ years of experience as a Data Engineer.
  • 3+ years of advanced SQL skills and experience working with relational databases, including query authoring, familiarity with a range of RDBMS platforms, Change Data Capture (CDC) usage, and (materialized) view creation.
  • 3+ years of experience working with Python or another scripting language.
  • Experience developing ETL scripts to populate data marts and data lakes.
  • Experience working with data sources such as MySQL and SQL Server, and with data warehouse technologies such as Redshift.
  • Experience running ETL jobs and workflows in AWS environments such as AWS Glue, AWS Lambda, and Apache Airflow (MWAA); a minimal sketch of such a workflow follows this list.
  • Expertise in building and optimizing data pipelines, data architectures, and datasets.
  • Ability to perform root cause analysis on internal and external systems and processes across the pipeline and infrastructure.
  • Strong analytical skills and experience with a data visualization tool (e.g., Amazon QuickSight, Tableau, Power BI).
  • Experience developing processes that support data transformation, data structures, metadata, dependency management, and workload management.
  • A track record of successfully manipulating, processing, and extracting valuable insights from large, disparate datasets.
  • Experience working with Git repositories, CI/CD, and unit tests.
  • Excellent communication and organizational skills.
  • Experience working with cross-functional teams in a fast-paced environment, providing support as needed.
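
As a hedged sketch of the kind of Airflow (MWAA) workflow mentioned above, the DAG below chains a daily extract task to a load task. The DAG id, task callables, and schedule are hypothetical placeholders, not details from this posting.

    # Minimal illustrative Airflow 2.x DAG: a daily extract -> load chain.
    # DAG id, callables, and schedule are hypothetical placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_orders(**context):
        # Placeholder: pull the day's records from a source database to S3.
        print("extracting for", context["ds"])

    def load_orders(**context):
        # Placeholder: COPY the staged file from S3 into Redshift.
        print("loading for", context["ds"])

    with DAG(
        dag_id="orders_etl",                  # hypothetical
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_orders)
        load = PythonOperator(task_id="load", python_callable=load_orders)
        extract >> load  # load runs only after extract succeeds

Splitting extract and load into separate tasks lets each be retried independently when a run fails.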

But What You Really Need for This Position

  • Possess strong cognitive abilities and a keen aptitude for learning.
  • Demonstrate a strong inclination towards automating routine tasks.

Job Benefits

  • Fully remote work (5 days a week).
  • Private medical insurance.
  • Educational aids and paid subscriptions, certifications, and exams.
  • Schedule flexibility.
  • Career growth.
  • Recreational company activities.
  • Free nutritionist for all employees.
  • Discounted rates with many national couriers and retailers.