Senior Data Engineer

Job Description

We’re looking for a Senior Data Engineer who is passionate about data to join our dynamic team. The ideal candidate will have solid experience designing, developing, and maintaining data pipelines, optimizing databases, and creating innovative solutions to manage large volumes of information. You will collaborate with internal technical teams and business-oriented clients, helping to transform data requirements into efficient solutions. If you have strong SQL knowledge, ETL experience, and skills with Big Data platforms, we want to meet you!

Essential Responsibilities:

  • Interact with technical teams and business-oriented clients to analyze data requirements and design suitable solutions.
  • Collect, clean, transform, and structure data from various sources (databases, APIs, logs).
  • Manage databases, create and maintain structures, and optimize query performance to ensure data consistency and integrity.
  • Develop and maintain ETL pipelines to extract, transform, and load data into databases or data warehouses.
  • Monitor data quality, implement validations, and troubleshoot issues to ensure data accuracy and reliability.
  • Work with modern data workflows (DBT, Airflow, Prefect, Dagster) and cloud platforms (Snowflake, Google BigQuery, Azure Synapse).
  • Collaborate on real-time data analysis projects using technologies such as Amazon Kinesis and Apache Kafka.
  • Participate in the creation and maintenance of conceptual, logical, and physical data models.
  • Support process automation through CI/CD (GitLab, GitHub Actions, Jenkins) and programming languages such as Python, JavaScript, Kotlin, or Java.

Minimum Qualifications:

  • Over 5 years of experience in data engineering roles, with a strong focus on data management.
  • Advanced command of SQL and experience writing complex queries, optimizing database performance, and designing efficient schemas.
  • Intermediate experience with ETL processes and familiarity with data warehousing solutions such as Amazon Redshift, Google BigQuery, or Snowflake.
  • Practical experience with Big Data platforms (Apache Spark, Presto, Amazon EMR) and databases such as SQL Server, PostgreSQL, MySQL, Oracle, Vertica.
  • Demonstrated skills in creating and maintaining data models (conceptual, logical, and physical).
  • Familiarity with container orchestration systems such as Kubernetes.

Additional Requirements:

  • Software engineering experience is an advantage.
  • Knowledge of visual analytics tools (Tableau, Looker, Power BI) and artificial intelligence/machine learning tools (Amazon SageMaker, Azure ML Studio) is preferred.
  • Ability to work with real-time data using technologies like Amazon Kinesis or Apache Kafka.
  • Strong problem-solving skills and the ability to work in multidisciplinary teams.

Benefits:

  • Competitive salary and performance-based bonuses.
  • Professional development opportunities, including training and certifications.
  • Flexible working hours and remote work options.
  • Collaborative and innovative work environment.