Please send your resume to jingxu6@paypal.com
9-5-5 schedule, rated a best employer. Work from home through Q3, generous vacation: an extra Friday off every six weeks, redefining the "big/small week" schedule.
Paid time off: 15 days of PTO in your first and second year; from the third year on, one additional day for each extra year of service, capped at 20 days.
Paid sabbatical: an additional 4 weeks of paid leave after 5 years of service.
Position:
Data Engineer (levels 24/25), suited to candidates 2-7 years out of school; the years-of-experience requirement in the job description below can be relaxed.
Core skill: SQL
Other pluses: Hadoop, Hive, Kafka, Airflow
English requirement: basic communication
Job Summary:
As a Data Engineer, you will be responsible for working on data applications and services that are vital to supporting GoPay business decisions. You will work in a fast-paced environment where continuous innovation and experimentation are a given.
Job Description:
Works with GoPay business units and Product Dev teams to design, develop and deliver data solutions.
Supports GoPay business units by providing data in a ready-to-use form to data analysts and data scientists for Business insights, predictive analytics, machine learning, etc.
Owns and is accountable for the design and development of a Data solution feature or a Data Pipeline.
Code is well-commented, easy to maintain, and can be reused across a sub-system or feature. Code may persist for the lifetime of a software version.
Code is thoroughly tested with very few bugs, and is supported by unit tests.
Recognized as the go-to developer for a product or major system.
Participates in feature or component design reviews and code reviews, and is fully recognized as the go-to developer for that component.
Participates in architecture discussions and proposes solutions to system and product changes directly related to their area of focus.
Responsible for managing multiple applications, providing necessary support and maintenance.
Should be comfortable working in an agile environment and with cross-functional teams; should have an appetite for learning and the flexibility to pick up new technologies.
Job Requirements:
Bachelor's degree or above in EE/CS or a related major.
7+ years of post-college working experience in Data engineering related subject area.
Can be relied on to deliver a data pipeline on time and to requirements, without data quality issues.
Able to evangelize best practices through prototyping or other means.
Helps resolve site database issues and SLA/RTB impacts.
Good understanding of data modeling concepts; experience modeling data and metadata to support relational & NoSQL database implementations.
Familiar with various big data technologies and open-source data processing frameworks.
Evaluates and implements data solutions with various big-data technologies.
Good understanding of data processing, data structure optimization and design for scalability.
Optimizes ETL processing of extremely large datasets; optimizes to meet SLAs.
Understanding of REST-oriented APIs, distributed systems, data streaming, and NoSQL solutions for creating and managing data integration pipelines.
Excellent communication and team player skills to collaborate with cross functional teams for successful delivery of Data solutions.
Understanding of version control systems, particularly Git.
Strong analytical and problem solving skills.
Good understanding of database principles and SQL beyond just data access.
Expert in multiple programming/scripting languages, e.g. shell scripting, Python, Java, etc.
Working knowledge of the following technologies:
SQL expertise (ANSI SQL, Hive SQL, Spark SQL, T-SQL, etc.)
Spark with Python, Scala, or Java
Hive
Database fundamentals (Oracle or MySQL or Teradata)
Hadoop ecosystem
Airflow is a plus
Kafka is a plus
ETL tool experience is nice to have:
Informatica, DataStage
Excellent oral and written communication skills in English.