Please send your application email to: viola.ke@emc.com
Location: Nanjing West Road, Huangpu District, Shanghai (near People's Square)
To help us identify your application, please use the email subject format below, and state in the email how many days per week you can work and for how many months you can continue.
"PivotalHD Intern - Name - School - Major - Year"
============== Job Description ==============
At Pivotal, our mission is to enable customers to build a new class of applications, leveraging big and fast data, and doing all of this with the power of cloud independence. Pivotal unites technology, people, and programs from EMC and VMware; the following leading products and services are now part of Pivotal: Greenplum, Cloud Foundry, Spring, GemFire and other products from the VMware vFabric Suite, Cetas, and Pivotal Labs.
Are you passionate about building great software products? Are you looking to work on state-of-the-art technology and big data?
The Pivotal Hadoop engineering team is looking for world-class, fun-loving engineers to join our growing team.
Job Responsibilities:
You will be responsible for the design and development of Pivotal's industry-leading Big Data product, built around the Apache Hadoop ecosystem. You are expected not only to gain a deep understanding of specific areas of the Hadoop stack (such as HDFS, Hive, and HBase), but also to understand the challenges and intricacies of deploying, monitoring, managing, and optimizing very large-scale distributed data systems such as Hadoop, with the goal of making them ready for a variety of enterprise environments, whether on bare metal or in the cloud. You will work with requirements driven by end customers, sales, and internal Pivotal teams, and translate them into highly scalable, robust software modules. You may also get the opportunity to contribute to the open source community.
Requirements:
MS or BS in Computer Science
Strong development experience with core Java in a Linux environment
Familiarity with at least one scripting language (Perl, Python, Ruby, etc.)
Good verbal and written communication skills
Understanding of the entire product life cycle: design, implementation, testing, and deployment
Desired Skills:
Some experience developing large-scale system software
Some understanding of big data, cloud computing, and scalable, high-performance environments
Experience with Hadoop and distributed systems is a big plus!