RE.WORK Deep Learning Summit

#TXMS Students attended the RE.WORK Deep Learning Summit – hear from them about the experience and what they’ve gained!

We had the amazing opportunity to attend the 5th Annual RE.WORK Deep Learning Summit in San Francisco, held on 24-25th January this year. The conference lined up captivating sessions by researchers and working professionals from organizations like Facebook, Google Brain, Uber AI Labs, Walmart Labs, GE, Samsung, and Aira (which works extensively to empower the blind), among numerous others. The sessions also included workshops, which were really constructive in getting us acquainted with alternative tools we can use for our ML-oriented projects. This year's conference was also special: in addition to the Deep Learning and AI Assistant sessions, there were numerous others revolving around Environment & Sustainability, Ethics & Social Responsibility, Futurescaping, Investors & Startups, Technical Labs, Education & AI, and Industry Applications; eight new stages never seen before at the Deep Learning Summit. (Pictured: MSBA - Serena Du, Avani Sharma, Apoorva Reddy, Atindra Bandi, Sagar Chadha, Akhilesh Narapareddy)

We collected our passes the night before 24th January at a welcome event, kicking off our networking early. The next day we started off at the Deep Learning stage, where Anirudh Koul from Aira introduced the distinguished speakers of the subsequent sessions, from teams like Google Brain and OpenAI, among many others. The most awaited session was, in Anirudh's words, by a "smart fellow, humble fellow": Ian Goodfellow, creator of Generative Adversarial Networks (GANs) and Research Scientist at Google Brain. He presented his most recent research in Adversarial Machine Learning.

Ian shared how ‘machine learning algorithms are based on optimization: given a cost function, the algorithm adapts the parameters to reduce the cost. Adversarial machine learning is instead based on game theory: multiple “players” compete to each reduce their own cost, often at the expense of other players’.
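The distinction Ian draws can be sketched in a few lines of plain Python (a toy illustration of our own, not an example from his talk): gradient descent on a single cost settles at its minimum, while two players taking simultaneous gradient steps on the minimax cost x*y keep circling the equilibrium instead of settling, a small hint of why adversarial training is famously delicate.

```python
# Toy contrast between ordinary optimization and a two-player game.
# Illustrative sketch only; the cost functions are our own choice.

def optimize(lr=0.1, steps=200):
    """Single player minimizing cost(x) = x**2 by gradient descent."""
    x = 1.0
    for _ in range(steps):
        x -= lr * 2 * x          # gradient of x**2 is 2x
    return x

def adversarial_game(lr=0.01, steps=1000):
    """Two players on cost(x, y) = x * y:
    player 1 lowers its cost by descending in x,
    player 2 lowers its own cost (-x * y) by ascending in y."""
    x, y = 1.0, 0.0
    for _ in range(steps):
        x, y = x - lr * y, y + lr * x   # simultaneous gradient steps
    return x, y

print(optimize())          # converges to essentially 0
print(adversarial_game())  # orbits the equilibrium, never settling at (0, 0)
```

The single-player run shrinks its parameter toward the optimum every step; the two-player run behaves like a rotation around the saddle point, so neither player's "win" is ever final.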

He explained how far we have come in generating completely new images with GANs: for example, an image of driving in the daytime can be converted to driving at night without any labels or supervision. He also talked about industrial applications of GANs, such as creating real-world objects for use in dentistry (replica false teeth, etc.).

Representatives of organizations like Twitter (Ashish Bansal) and Walmart Labs, among others, talked about how AI disruption isn't something to be concerned about, given the current inability of machines to adapt and generalize. There were many other interesting sessions on Day 1, one of them by Yixuan Li from Facebook AI. Their problem statement was understanding the mammoth volume of visual content generated by Facebook users, to help connect users with the things that matter to them most. The algorithms they ran were so huge in scale that processing the entire dataset took 22 days on about 356 GPUs.

Another interesting workshop we attended, in the Connect stage sessions, was a hands-on PyTorch session built around a text classification application, led by Yannet Interian from the University of San Francisco. The lunch and coffee breaks were also very informative, as we got to know more about the experiences of working professionals and researchers in the deep learning field. Other topics we covered through the sessions included Applying ML & NLP in Google Ads (Sugato Bose from Google) and On-Device Neural Networks for Natural Language Processing (Zornitsa Kozareva from Google), among others.
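To give a flavor of what a text classification exercise like the workshop's involves, here is a minimal bag-of-words classifier in plain Python (our own toy sketch with made-up sentences, not the workshop's PyTorch code): each sentence becomes a word-count vector, and a perceptron learns weights separating the two classes.

```python
# Toy bag-of-words sentiment classifier (illustrative sketch only).

train = [
    ("great movie loved it", 1),
    ("what a great experience", 1),
    ("loved the acting", 1),
    ("terrible movie hated it", 0),
    ("what a terrible plot", 0),
    ("hated the acting", 0),
]

# Build a vocabulary and map each word to a vector index.
vocab = sorted({w for text, _ in train for w in text.split()})
index = {w: i for i, w in enumerate(vocab)}

def featurize(text):
    """Turn a sentence into a word-count vector over the vocabulary."""
    vec = [0.0] * len(vocab)
    for w in text.split():
        if w in index:
            vec[index[w]] += 1.0
    return vec

# Perceptron training: nudge weights toward misclassified examples.
weights = [0.0] * len(vocab)
bias = 0.0
for _ in range(20):
    for text, label in train:
        x = featurize(text)
        pred = 1 if sum(w * v for w, v in zip(weights, x)) + bias > 0 else 0
        if pred != label:
            sign = 1.0 if label == 1 else -1.0
            weights = [w + sign * v for w, v in zip(weights, x)]
            bias += sign

def classify(text):
    x = featurize(text)
    return 1 if sum(w * v for w, v in zip(weights, x)) + bias > 0 else 0

correct = sum(classify(t) == y for t, y in train)
print(f"train accuracy: {correct}/{len(train)}")  # -> 6/6
```

A real PyTorch version would replace the hand-rolled perceptron loop with an `nn.Module`, a loss function, and an optimizer, but the pipeline (tokenize, vectorize, fit, predict) is the same.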

On Day 2, we started off with a morning session by Anirudh Koul from Aira, who had hosted the Deep Learning stage the day before. The session, part of the Ethics & Social Responsibility stage, revolved around how AI can empower the blind community; his motivation for the project came from his late grandfather. He explained how AI-empowered goggles could help blind users understand their surroundings better: an app, fed by a camera (on a phone or on the goggles), would guide the user by describing the visuals that the algorithms could decipher.

We also attended a panel discussion on the Futurescaping stage, 'Human-Centric AI: Interpreting and Adjusting to Human Needs in Human-Machine Collaboration', with panelists from different spaces: Dimitri Kanevsky (Research Scientist at Google), Vinod B. (Data Scientist at Coursera), and Dorsa Sadigh (PhD student at Stanford University). Another session we attended was 'Brand is Beyond Logos - Understanding Visual Brand' by Robinson Piramuthu from eBay, where he explained how logos impact the sensory perceptions of viewers and how that can be signaled using neural networks. Sessions like predicting the onset of Alzheimer's using neural networks made us aware of the breadth and depth to which deep learning can be applied across use cases.

The conference was really fruitful in broadening how we think about applying deep learning and ML techniques. We got to interact and network with professionals from the data science community, who shared their experiences and widened our perspective. We ended the memorable experience of attending the RE.WORK Summit by visiting the iconic Golden Gate Bridge and taking in the mesmerizing Pacific!
