Senior Software Engineer (Big Data) Kuala Lumpur, Malaysia
Description
Job #: 74608
We are looking for an experienced Senior Software Engineer (Big Data) to join our office in Bangsar South, Malaysia. This Senior Engineer will own the functional and technical aspects of software development with Big Data technologies, on projects spanning on-premise systems to cloud services.
EPAM Systems, Inc. (EPAM) is a global product development, digital platform engineering, and digital and product design agency headquartered in the US. EPAM has been named five times to Forbes’ list of the 25 Fastest Growing Public Tech Companies. Kuala Lumpur is the most recent location in which EPAM has acquired a new business. This is your opportunity to join an organization in start-up mode that plans to (at least) double in size every 2-3 years, and the team you are joining will be responsible for that growth.
Responsibilities
- Design and implement innovative analytical solutions using Hadoop, NoSQL and other Big Data-related technologies, evaluating new features and architectures across cloud, on-premise and hybrid solutions
- Work with product and engineering teams to understand requirements, evaluate new features and architecture to help drive decisions
- Build collaborative partnerships with architects, technical leads and key individuals within other functional groups
- Perform detailed analysis of business problems and technical environments and use this in designing quality technical solutions
- Actively participate in code reviews and test solutions to ensure they meet best-practice specifications
- Build and foster a high-performance engineering culture, mentor team members and provide the team with the tools and motivation they need
- Write project documentation
Requirements
- Candidates must possess at least a Bachelor’s Degree, preferably in Computer Science/Information Technology or equivalent
- A minimum of 3 years’ solid experience in Big Data technologies and enterprise software development
- Engineering experience and practice in Data Management, Data Storage, Data Visualization, Disaster Recovery, Integration, Operation, Security
- Experience building data ingestion pipelines, Data Warehouse or Database architecture
- Experience with data modelling; hands-on development experience with modern Big Data components
- Good understanding of CI/CD principles and best practices
- Experience with containers and resource management systems: Docker, Kubernetes, YARN
- Solid skills in infrastructure troubleshooting, support and practical experience in performance tuning and optimization, bottleneck problem analysis
- English proficiency
Technologies
- Relevant:
- Programming Languages: Java; Python; SQL
- Big Data stack: Hadoop, HDFS, Hive, Spark, Kafka, Sqoop, Zookeeper
- Queues and stream processing: Kafka Streams; Spark Streaming; Event Hub; IoT Hub; Storage Queues; Service Bus; Stream Analytics
- ETL & Streaming Pipelines: Pentaho; Talend; Apache Oozie, Airflow, NiFi; Streamsets
- Operation: Cluster operation, Cluster planning
- Familiarity with cloud platforms (Azure/GCP)
- Development Methods (TDD, BDD, DDD)
- Version Control Systems (Git, SVN)
- Testing: Component/ Integration Testing, Unit testing (JUnit)
- Experience with various messaging systems, such as Kafka and RabbitMQ
- REST, Thrift, gRPC, SOAP
- Build Systems: Maven, SBT, Ant, Gradle
- Advantageous:
- Search: Solr, Elasticsearch/ELK
- In-memory: Ignite, Redis
- Data Visualization: Tableau, QlikView
- NoSQL: Cassandra/HBase; MongoDB
- Enterprise design patterns (ORM, Inversion of Control, etc.)
We offer
- Friendly team and enjoyable working environment
- Work-life balance and flexible schedule
- Online training library, mentoring, career development and potential partial sponsorship of certifications
- Unlimited access to LinkedIn learning solutions
- Referral bonuses
- Compensation for sick leave and paid time off
- Opportunities for self-realization