Job ref no.: 1800030712 (CT3114814-01#0251)
Standard Chartered Bank

Data Engineer - Virtual Banking

The Role Responsibilities
We're looking for a Data Engineer to work on site with our development and data science teams in our offices in Hong Kong. We work in project-based sprints in small, interdisciplinary teams.

As a Data Engineer, you will be responsible for the design, creation and maintenance of the analytics infrastructure that enables almost every other function in the data world. This covers the development, construction, maintenance and testing of architectures such as data lakes, warehouses, databases, data pipelines and large-scale processing systems. As part of the Data Engineering team, you will also create the data set processes used in modelling, mining, acquisition and verification.

  • Collaborate closely with our development and product teams in our fast-paced delivery environment
  • Design, build and maintain modern, automated, cloud-native analytics infrastructure
  • Build and manage data warehouses, databases and data pipelines (see the sketch after this list)
  • Understand and translate business needs into data models supporting long-term solutions. Work with the development team to implement data strategies, build data flows and develop conceptual, logical and physical data models that ensure high data quality and reduced redundancy
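
For flavour, here is a minimal sketch of the kind of orchestrated daily pipeline this role builds, using Airflow (one of the tools listed under the qualifications below). The DAG id, schedule and the extract/load stubs are assumptions for illustration only, not details of the role:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Placeholder: pull data from a source system (API, database, ...)
        print("extracting")

    def load():
        # Placeholder: load the extracted data into the warehouse
        print("loading")

    with DAG(
        dag_id="example_daily_pipeline",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task  # extract runs before load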

Qualifications and Education Requirements

  • Knowledge of best practices for building modern data lakes, data warehouses and data pipelines
  • Good understanding of the relevant technologies, and experience in building a highly scalable, fault-tolerant cloud data platform
  • Self-starter, capable of working without direction and able to deliver projects from scratch
  • Practical experience and knowledge in building and maintaining data warehousing/big data tools: Hadoop and MapReduce, Apache Spark and Spark SQL, Hive
  • In-depth knowledge of RDBMS (PostgreSQL and MySQL) and NoSQL (HBase) databases
  • Strong experience in building and maintaining cloud big data and ETL tools: Google Bigtable, BigQuery and Airflow (Cloud Composer)
  • Strong knowledge of and experience with Apache Beam for implementing batch and streaming data processing jobs (see the sketch after this list), and a strong development background in Python or Java
  • Strong knowledge of messaging systems such as Kafka, RabbitMQ and Google Pub/Sub
  • Experience with Agile/Lean projects (Scrum, Kanban, etc.)
  • Practical knowledge of Git Flow, trunk-based and GitHub Flow branching strategies
  • Strong English communication skills
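
To make the Apache Beam requirement concrete, here is a minimal batch sketch using the Beam Python SDK with the default local runner; the events.csv path, the "user_id,amount" line layout and the output prefix are assumptions for illustration:

    import apache_beam as beam

    def parse_line(line):
        user_id, amount = line.split(",")
        return (user_id, float(amount))

    with beam.Pipeline() as pipeline:
        (
            pipeline
            | "Read" >> beam.io.ReadFromText("events.csv")  # hypothetical input
            | "Parse" >> beam.Map(parse_line)                # -> (user_id, amount)
            | "SumPerUser" >> beam.CombinePerKey(sum)        # total per user
            | "Format" >> beam.MapTuple(lambda user, total: f"{user},{total}")
            | "Write" >> beam.io.WriteToText("user_totals")  # sharded text files
        )

The same pipeline shape runs in streaming mode by swapping the source for an unbounded one such as beam.io.ReadFromPubSub and adding windowing before the aggregation.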

Preferred Skills

  • Container management and container orchestration experience – Docker, Kubernetes
  • Monitoring tools – Elastic Stack, Prometheus, Grafana
  • Breadth of knowledge – operating systems, networking, distributed computing, cloud computing
  • Familiarity with big data technologies (AWS Redshift, Panoply), ETL tools (StitchData and Segment), and machine learning technologies and environments

Apply now to join the Bank for those with big career ambitions.

More job information
Salary
  • N/A
Employment Term
  • Permanent
  • Full-time
Experience
  • N/A
Career Level
  • Middle management level
Education
  • Degree