— Experience in building, improving, and monitoring data pipelines (ETL/ELT) for big data environments;
— Knowledge of Microsoft Azure technologies: Data Factory, Databricks, Machine Learning services, Power BI, Logic Apps;
— Knowledge of data infrastructure monitoring and support;
— An agile mindset and belief in the CI/CD way of working;
— English: Intermediate+ (daily communication).
— Experience with Jupyter Notebook, the Hadoop ecosystem, Python, Java, SQL, Docker/Kubernetes, Spark, Grafana, ELK stack;
— Previous work experience as a data engineer, database administrator, data analyst, or in a similar role;
— Knowledge of Microsoft SQL Server (including HA), SSRS, SSAS, SSIS;
— Certifications in data-related competencies (DBA, OCP, and others).
— Possibility to work remotely or in the office;
— Competitive, above-average salary;
— Challenging tasks and professional growth;
— Paid vacations, public holidays, and sick leaves;
— Professional education, sports, and healthcare reimbursement (50% compensation);
— IT Cluster/Club bonus program.
— Provide, implement, and configure tools and data infrastructure that enable data analysts, data scientists, and business stakeholders to work efficiently with large amounts of data, discover unique insights, and present them in the best possible way;
— Shape and customize our solution using modern data technologies, working with stakeholders, users, and other teams to build, innovate, and continuously improve both our data platform and our ways of working;
— Support customer data infrastructure and solutions delivered in accordance with SLAs.