The ideal candidate has a minimum of five years' industry experience in data science and programming. The candidate should be able to work independently and build data pipelines, which may require writing their own web scraper, working with external teams to develop APIs, or querying SQL and NoSQL databases. Experience with a cloud computing platform such as AWS is required. They should be able to communicate results concisely to the business.
M.Sc. / PhD in computer science, statistics, engineering, mathematics, or physics
Experience with web analytics and web technologies
Familiarity with machine-learning algorithms, both supervised and unsupervised. Familiarity with ML toolkits such as scikit-learn, H2O, and caret.
Good understanding of statistics; knowledge of experimental design (e.g., A/B tests) is a plus.
Scripting and programming: proficiency in Python; R is good to have; any other language is a plus.
Fluency with Unix/Linux
Working knowledge of query languages: SQL and NoSQL (e.g., MongoDB).
Visualization (ggplot2, matplotlib, D3, Highcharts, etc.)
Eager to explore new territories while staying focused on delivering quick prototypes.
Skilled and fun team to work with.
Improve data pipelines
Build machine-learning models