Thursday, April 24, 2014

ML Meetups



SF Machine Learning Meetup
Thursday, April 24, 2014
Large-Scale Machine Learning with Apache Spark

We'll be hosting a series of events on machine learning in Spark.
It's our pleasure to have Xiangrui Meng from Databricks as our first speaker in this series, introducing Spark to data scientists.
For the next meetup on May 1, we will have a joint event with Cloudera covering Part 2 of Spark and MLlib, including a large-scale multinomial logistic regression implementation in Spark.
In the future, we'll also cover the Random Forest implementation in Spark.

Spark is a new cluster computing engine that is rapidly gaining popularity — with over 150 contributors in the past year, it is one of the most active open source projects in big data, surpassing even Hadoop MapReduce. Spark was designed to both make traditional MapReduce programming easier and to support new types of applications, with one of the earliest focus areas being machine learning. In this talk, we’ll introduce Spark and show how to use it to build fast, end-to-end machine learning workflows. Using Spark’s high-level API, we can process raw data with familiar libraries in Java, Scala or Python (e.g. NumPy) to extract the features for machine learning. Then, using MLlib, its built-in machine learning library, we can run scalable versions of popular algorithms. We’ll also cover upcoming development work including new built-in algorithms and R bindings.
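To give a flavor of the workflow described in the abstract, here is a minimal sketch of an end-to-end job using Spark's Python API and MLlib. It assumes a Spark build that includes the Python MLlib bindings; class and parameter names may differ slightly across Spark versions, and the input path is hypothetical.

from pyspark import SparkContext
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.classification import LogisticRegressionWithSGD

sc = SparkContext("local[4]", "MLlibDemo")

# Feature extraction with plain Python: parse whitespace-separated lines
# of "label f1 f2 ..." into LabeledPoint objects.
def parse_line(line):
    values = [float(x) for x in line.split()]
    return LabeledPoint(values[0], values[1:])

# Hypothetical input path; cache the parsed RDD for iterative training.
data = sc.textFile("hdfs:///data/points.txt").map(parse_line).cache()

# Train a scalable logistic regression model with MLlib.
model = LogisticRegressionWithSGD.train(data, iterations=100)

# Evaluate training error by comparing predictions against labels.
pairs = data.map(lambda p: (model.predict(p.features), p.label))
train_err = pairs.filter(lambda pl: pl[0] != pl[1]).count() / float(data.count())
print("Training error: %g" % train_err)

The same pattern works from Scala or Java: extract features with ordinary collection operations on RDDs, then hand the resulting dataset to an MLlib algorithm.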

Bio:
Xiangrui Meng is a software engineer at Databricks. He has been actively involved in the development of Spark MLlib since he joined. Before Databricks, he worked as an applied research engineer at LinkedIn, where he was the main developer of an offline machine learning framework in Hadoop MapReduce. His thesis work at Stanford was on randomized algorithms for large-scale linear regression.
