December 11, 2018

NeurIPS 2018: Clarifai Research Scientist Perspectives


The Conference on Neural Information Processing Systems (NeurIPS, formerly called NIPS) is THE machine learning and computational neuroscience research conference, held yearly in December. NeurIPS 2018 drew over 8,000 attendees and featured 1,000+ academic papers showcasing the latest research developments in deep learning. New algorithms, concepts, practices, experiments, and ideas were shared and discussed in a grand meeting of the AI minds!

I spoke with two of Clarifai's dedicated research scientists, Yanan Jian and Sam Dodge, who went to Montreal from San Francisco for this year's event.

Tell us a bit about yourselves and what you do at Clarifai!

Yanan: I've been doing research at Clarifai from our San Francisco office for just over a year and a half. For the past year, I've mainly focused on computer vision-related research and products, but more recently, I've started looking into projects that sit between natural language processing (NLP) and vision.

Sam: I recently finished my Ph.D. and have been a Senior Research Scientist at Clarifai in San Francisco for about half a year now.

Was this your first time at NeurIPS? What excited you about this year’s conference?

Yanan: This is my second time; I went to last year's conference, which took place in LA (and compared to Montréal, the weather in LA was much nicer). There were three things I wanted to get out of NeurIPS this year:

1. Get inspired by work that can be applied to projects within Clarifai.

2. Discover work that offers strong insights and can also be applied in industry; NeurIPS is a great place to find it.

3. Learn about domains I haven't worked in before.

Sam: This was my first time attending NeurIPS, so I was super excited. The community is slightly different from the computer vision conferences I have attended before, so it was a good opportunity to see recent advancements in areas outside of computer vision (like natural language processing, graph neural networks, etc.).


What were some of the trends you noticed during the event?

Yanan: I don't know if it counts as a 'trend,' but I saw more code releases in the NeurIPS community. Making results reproducible not only makes the research process faster and more trustworthy, it also makes the work easier for industry to adopt, which gives it a bigger impact. I also think there were more research papers on reinforcement learning (RL) this year.

Sam: There is a lot of work trying to understand why certain things in deep learning work. For example, there were at least three papers that analyzed batch normalization (a technique for improving the performance and stability of artificial neural networks) and tried to provide new explanations for it, each different from the original explanation for why it works. It is very useful that the community is trying to go beyond empirical results and provide better theories for why deep nets work.
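For readers who haven't run into the technique, here is a minimal NumPy sketch of the batch normalization forward pass (the function name and shapes here are illustrative, not taken from any of the papers discussed):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch, then scale and shift.

    x:     (batch_size, features) activations
    gamma: (features,) learned scale
    beta:  (features,) learned shift
    """
    mu = x.mean(axis=0)                    # per-feature mean over the batch
    var = x.var(axis=0)                    # per-feature variance over the batch
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta            # learned scale/shift restores capacity

# Example: a batch of 4 samples with 3 badly scaled features
x = np.random.randn(4, 3) * 10 + 5
out = batch_norm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
print(out.mean(axis=0), out.var(axis=0))  # ~0 mean, ~1 variance per feature
```

The "original explanation" Sam refers to is the internal covariate shift hypothesis from the paper that introduced the technique; the NeurIPS 2018 work proposed alternatives, such as batch norm smoothing the optimization landscape.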


What were key learnings you took away from this year’s event? Any favorite panels or speakers?

Yanan: One key thing I learned is that when getting into a field, you shouldn't take the existing evaluation methods for granted. My favorite speaker was Joelle Pineau, whose talk, “Reproducible, Reusable, and Robust Reinforcement Learning,” critiqued RL research with regard to concerns like robustness, reproducibility, and reusability. Though people are crazy about RL and we've seen more work on it, it has flaws, and people have not been reporting their failure cases. Analysis of failure cases is important.

Another talk that I liked was "Towards real-world visual reasoning" by Christopher Manning. This talk presented failure cases in visual reasoning and concluded that current reasoning datasets have limitations: "Current reasoning datasets only have a very small space of possible objects and attributes and high capacity models may memorize all combinations, reducing effective compositionality."

Sam: There were lots of interesting talks and posters at NeurIPS. One paper, "Sanity Checks for Saliency Maps," showed that existing methods for computing saliency maps from neural networks (a popular way to “highlight features in an input deemed relevant for the prediction of a learned model”) give the same results for a randomly initialized network as for a fully trained one. So these saliency methods may not actually provide useful information for interpretability.
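As a rough illustration of that paper's model-randomization test, here is a sketch in PyTorch using plain gradient saliency. The tiny model, shapes, and names are our own stand-ins; a real check would load an actually trained classifier:

```python
import torch
import torch.nn as nn

def gradient_saliency(model, image, target_class):
    """Vanilla gradient saliency: |d class-score / d input pixel|."""
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    return image.grad.abs().squeeze(0)

# Toy classifier standing in for a real trained model (illustrative only)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.randn(1, 3, 32, 32)
saliency_before = gradient_saliency(model, image, target_class=0)

# The randomization test: re-randomize the weights and recompute the map.
# A sound saliency method should now produce a clearly different map.
for p in model.parameters():
    nn.init.normal_(p)
saliency_after = gradient_saliency(model, image, target_class=0)

# Compare the two maps; high similarity is a red flag for the method
corr = torch.corrcoef(torch.stack([saliency_before.flatten(),
                                   saliency_after.flatten()]))[0, 1]
print(f"correlation between original and randomized saliency: {corr.item():.3f}")
```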

Also, one of the Best Paper award winners, "Neural Ordinary Differential Equations," described how to incorporate a differentiable ordinary differential equation (ODE) solver into a neural network. This allows the network to have adjustable depth and model complex continuous functions.
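The core idea is to parameterize the derivative of the hidden state with a small network and integrate it over "time" instead of stacking discrete layers. Here is a minimal PyTorch sketch with a fixed-step Euler solver; the paper itself uses adaptive solvers and an adjoint method for memory-efficient backprop, so this is a simplification:

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Learned dynamics f(t, h) defining dh/dt."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, h):
        return self.net(h)  # these dynamics ignore t, for simplicity

def odeint_euler(func, h0, t0=0.0, t1=1.0, steps=20):
    """Fixed-step Euler solver: h <- h + dt * f(t, h)."""
    h, dt = h0, (t1 - t0) / steps
    for i in range(steps):
        h = h + dt * func(t0 + i * dt, h)
    return h

func = ODEFunc(dim=8)
h0 = torch.randn(16, 8)       # a batch of 16 hidden states
h1 = odeint_euler(func, h0)   # continuous-depth feature transform
loss = h1.pow(2).mean()
loss.backward()               # gradients flow back through the solver steps
print(h1.shape)               # torch.Size([16, 8])
```

Because the solver controls how many evaluation steps it takes, depth becomes a tunable accuracy knob rather than a fixed architectural choice.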

 

Any favorite moments from the trip? 

Yanan: Watching JT (one of our Senior Research Scientists) walk through the snow in shorts and flip-flops, in -10°C (14°F) temperatures. According to him, it's not bad! I also really enjoyed learning basic French from my Uber driver once I landed in Montréal. At least I learned how to pronounce my hotel's name!

Sam: I also vote for JT in the snow in flip-flops. There were 5,000+ people at the conference, and I don't think anyone else was brave enough to wear shorts and flip-flops in the snow!


Clarifai Sr. Research Scientist JT Turner, unfazed by the Montreal cold at NeurIPS 2018

Want to read more about what Clarifai got up to at this year's NeurIPS? Check out our recent recap of CEO Matt Zeiler's panel discussion on commercializing AI research.

Ready to take advantage of all these advances in deep learning for computer vision? Clarifai specializes in incorporating the latest research techniques so that you get all the benefits with none of the effort! Sign up for a free account to give our technology a spin; it's free to try for up to 5,000 operations per month.
