
Data Scientist (Azure) - Contract role

  • Singapore, SINGAPORE
  • Studies / Statistics / Data

Job description
Unilever is one of the world's leading suppliers of Food, Home and Personal Care products with sales in over 190 countries and reaching 2 billion consumers a day. It has 172,000 employees and generated sales of €48.4 billion in 2014. Over half (57%) of the company's footprint is in developing and emerging markets. Unilever has more than 400 brands found in homes around the world, including Persil, Dove, Knorr, Domestos, Hellmann's, Lipton, Wall's, PG Tips, Ben & Jerry's, Marmite, Magnum and Lynx.

Unilever's Sustainable Living Plan (USLP) commits to:

• Decoupling growth from environmental impact.
• Helping more than a billion people take action to improve their health and well-being.
• Enhancing the livelihoods of millions of people by 2020.

Unilever was ranked number one in its sector in the 2014 Dow Jones Sustainability Index. In the FTSE4Good Index, it achieved the highest environmental score of 5. It led the list of Global Corporate Sustainability Leaders in the 2014 GlobeScan/SustainAbility annual survey for the fourth year running, and in 2015 was ranked the most sustainable food and beverage company in Oxfam's Behind the Brands Scorecard.

Unilever has been named in LinkedIn's Top 3 most sought-after employers across all sectors.

For more information about Unilever and its brands, please visit www.unilever.com. For more information on the USLP: www.unilever.com/sustainable-living/

JOB TITLE: Data Scientist (Azure)
JOB LOCATION: Singapore
TERMS: Contract role (6 months)


Expected Work:

This is a highly technical role, intensively involved in data science projects in the NRM space, starting with Darwin but not limited to it. You will ensure that stable, scalable data pipelines, processes and workflows are created within the NRM COE DS environment (Azure), and build services on top of them to support business operations.

Key Accountabilities:
• Create and own data pipelines and processes
• Model deployment
• Model development

Experience and Qualifications Required:

Professional Skills

• Azure: Working Knowledge
• Databricks: Working Knowledge
• Python: Fully Operational
• Spark (PySpark): Working Knowledge
• sparklyr: Working Knowledge
• Data modelling: Working Knowledge
• Airflow (or another scheduling tool): Working Knowledge
• scikit-learn: Working Knowledge
• Databases/SQL: Working Knowledge

Strong communication skills and the ability to work with peers and exert both vertical and lateral influence.

Relevant Experience:
• Diploma or B.S. in a relevant technical field.
• Proficiency in Python; Spark proficiency is a plus.
• Strong track record of working independently with minimal guidance.

As part of the application process, you will be asked to complete an online assessment consisting of 2 questions.
This is an important part of our application procedure and will take approximately 1 minute of your time to complete. Filling it out partially or not at all may adversely affect the progress of your application.

TO APPLY:

Please apply online by clicking on “Apply Online” below. Your application will be reviewed against our requirements. Should you not meet our immediate requirements, your profile will be registered in our talent pool system and we will match your profile to suitable future vacancies.

You will be able to access your status update through the candidate tracking link.

Thank you for your interest and application.
