
Data Engineer with Databricks Hungary or Remote


Job #: 90254
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.

No less important is the safety, well-being, and experience of our applicants. Therefore, until further notice, all EPAM employment interviews will be conducted remotely. Our recruitment professionals and hiring managers are standing by to ensure a robust and engaging virtual candidate experience. We look forward to speaking with you!

DESCRIPTION


We are looking for a Data Engineer with Databricks knowledge to join our team in Hungary.

The Data Engineer should have subject matter expertise in integrating, transforming, and consolidating data from various structured and unstructured data systems into structures suitable for building analytics solutions.

Responsibilities for this role include helping stakeholders understand the data through exploration, and building and maintaining secure, compliant data processing pipelines using a range of tools and techniques. Our Data Professionals use various data services and frameworks to store and produce cleansed and enhanced datasets for analysis.
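As a concrete illustration of the kind of cleansing-and-consolidation step described above, here is a minimal, standard-library-only Python sketch. The data sources, field names, and cleansing rules are invented for illustration; in practice this work would run on the Spark/Databricks stack named later in this posting.

```python
import csv
import io
import json

# Hypothetical inputs standing in for "structured and unstructured data systems":
# a CSV export and a JSON event feed (both invented for this example).
csv_source = "customer_id,country\n42,HU\n7,DE\n"
json_source = (
    '[{"customer_id": 42, "last_login": "2024-01-05"},'
    ' {"customer_id": 7, "last_login": null}]'
)

def consolidate(csv_text, json_text):
    """Join the two sources on customer_id, dropping incomplete records."""
    profiles = {
        int(row["customer_id"]): row
        for row in csv.DictReader(io.StringIO(csv_text))
    }
    merged = []
    for event in json.loads(json_text):
        profile = profiles.get(event["customer_id"])
        if profile is None or event["last_login"] is None:
            continue  # cleansing step: skip unmatched or incomplete rows
        merged.append({
            "customer_id": event["customer_id"],
            "country": profile["country"],
            "last_login": event["last_login"],
        })
    return merged

print(consolidate(csv_source, json_source))
# → [{'customer_id': 42, 'country': 'HU', 'last_login': '2024-01-05'}]
```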

What You’ll Do

  • Consult with and help our diverse teams develop, implement, and maintain sustainable, high-performance, growth-ready data-processing and data integration systems
  • Design, construct, install, test and maintain highly scalable and optimized data pipelines with state-of-the-art monitoring and logging practices
  • Bring together large, complex, and sparse data sets to meet functional and non-functional business requirements, using a variety of languages, tools, and frameworks to merge data
  • Design and implement data tools for analytics and data scientist team members to help them build, optimize, and tune use cases
  • Leverage and improve a cloud-based tech stack that includes AWS, Databricks, Kubernetes, Spark, Airflow, Python

What You Have

  • 3+ years of hands-on experience on data-processing-focused projects
  • Expertise in Apache Spark, including Spark Streaming and Spark SQL
  • Good hands-on experience with Databricks and Delta Lake
  • Ability to build Apache Airflow pipelines
  • Proficiency with Python and SQL
  • Good hands-on experience with a cloud provider such as AWS, Azure, or GCP
  • Understanding of relational database management systems
  • Relevant experience with version control and code review
  • Knowledge of Agile methodologies
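The Python-and-SQL pairing listed above can be sketched with a small, self-contained example using Python's standard-library sqlite3 module. The table and column names are hypothetical, chosen only to show a typical aggregation a data engineer might push down to SQL; on this role's stack the same query would more likely run via Spark SQL against Delta Lake tables.

```python
import sqlite3

# In-memory database with a hypothetical events table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(1, 10.0), (1, 5.0), (2, 7.5)],
)

# Aggregate per user in SQL, then fetch the results into Python.
rows = conn.execute(
    "SELECT user_id, SUM(amount) AS total FROM events "
    "GROUP BY user_id ORDER BY user_id"
).fetchall()
print(rows)  # → [(1, 15.0), (2, 7.5)]
conn.close()
```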

We offer

  • Dynamic, entrepreneurial, high-speed, high-growth corporate environment
  • Diverse multicultural, multi-functional, and multilingual work environment
  • Opportunities for personal and career growth in a progressive industry
  • Global scope, international projects
  • Widespread training and development opportunities
  • Unlimited access to LinkedIn Learning solutions
  • Competitive salary and various benefits
  • Sport and social teams support, recreation area, advanced CSR programs
