Our Big Data Engineers focus on collecting raw data from various sources and transforming it into a standard format so that it can be easily consumed by analysts and downstream applications.
We develop and maintain data ingestion jobs. We also maintain Hadoop systems and are responsible for infrastructure changes.
Big Data Engineers work closely with data stakeholders, such as host systems and data scientists, and are largely in charge of architecting the solutions that connect them.
Design data warehouse architecture. Set up and maintain data warehouse infrastructure.
Set up and maintain data warehouse applications, such as data browsers and job schedulers.
Monitor data warehouse status and hardware capacity, and optimize system performance.
Work with stakeholders, including data engineers, data analysts, and other data users, to assist with data-related technical issues and support their data infrastructure needs.
Conduct research on trending data platform applications and solutions. Create proofs of concept (POCs) for innovative ideas and develop practical proposals.
Bachelor's degree or higher in Computer Science, Computer Engineering, or a related field
Self-learner with a strong sense of ownership
Passionate about programming, innovation, and solving challenging problems
Programming knowledge of Shell, Python, Scala, or Java; in-depth understanding of operating systems, networks, and other computer fundamentals
Database knowledge of MySQL, SQL Server, MongoDB, Redis, or other systems is a plus
Practical experience with Hadoop/Spark, Linux/Unix development, and cloud solutions is a plus