Newsletter

Up to Speed on AI and Deep Learning: June 22 to July 11

by Michael Osakwe, July 12, 2019

Announcements

  • Google, Sanofi to Use Deep Learning for Healthcare Innovation
    (Health IT Analytics)
    Google and Sanofi have partnered to launch a virtual Healthcare Innovation Lab, which will use deep learning and other data analytics technologies to transform the delivery of medicine and health services.
  • Managing Machine Learning Models the Uber Way
    (Forbes)
    With access to the rich data coming from its cabs, drivers, and users, Uber has been investing in machine learning and artificial intelligence to enhance its business. Uber AI Labs consists of ML researchers and practitioners who translate state-of-the-art machine learning techniques and advancements into gains for Uber’s core business. From computer vision to conversational AI to sensing and perception, Uber has successfully infused ML and AI into its ride-sharing platform.
  • Machine Learning helps Microsoft's AI realistically colorize video from a single image
    (VentureBeat)
    Film colorization might be an art form, but it's one that AI models are slowly getting the hang of. In a paper published on the preprint server arXiv.org ("Deep Exemplar-based Video Colorization"), scientists at Microsoft Research Asia, Microsoft's AI Perception and Mixed Reality Division, Hamad Bin Khalifa University, and USC's Institute for Creative Technologies detail what they claim is the first end-to-end system for autonomous exemplar-based (i.e., derived from a reference image) video colorization.
  • The Raspberry Pi Foundation unveils the Raspberry Pi 4
    (TechCrunch)
    While the Raspberry Pi first started as a simple computer designed to teach kids how to code, it has become a versatile device with many different use cases. The revamped Raspberry Pi 4 has additional functionality and can be purchased with varying sizes of memory.
  • Facebook open-sources DLRM, a deep learning recommendation model
    (Venture Beat)
    Facebook has announced the open-source release of its Deep Learning Recommendation Model (DLRM), a state-of-the-art AI model for serving personalized results in production environments. DLRM can be found on GitHub, and implementations of the model are available for Facebook’s PyTorch, Facebook’s distributed learning framework Caffe2, and Glow C++; a simplified sketch of the pattern DLRM uses appears after this list.
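
For readers who want a feel for the DLRM architecture mentioned above, the sketch below is a simplified, hypothetical PyTorch illustration of its embedding-plus-MLP pattern: sparse categorical features pass through embedding tables, dense features through a bottom MLP, and the combined feature vectors feed a top MLP that predicts an interaction probability. It is a toy that concatenates feature vectors rather than computing DLRM’s explicit pairwise dot-product interactions; Facebook’s reference implementation on GitHub is the authoritative version.

```python
# Simplified sketch of the DLRM pattern (not Facebook's implementation):
# embedding tables for sparse categorical features, a bottom MLP for dense
# features, and a top MLP over the combined vectors producing a probability.
import torch
import torch.nn as nn

class TinyDLRM(nn.Module):
    def __init__(self, cat_cardinalities, num_dense, emb_dim=16):
        super().__init__()
        # one embedding table per categorical (sparse) feature
        self.embeddings = nn.ModuleList(
            [nn.Embedding(n, emb_dim) for n in cat_cardinalities]
        )
        # bottom MLP projects dense features into the same embedding space
        self.bottom_mlp = nn.Sequential(
            nn.Linear(num_dense, 64), nn.ReLU(), nn.Linear(64, emb_dim), nn.ReLU()
        )
        # top MLP consumes the concatenated feature vectors and predicts a score
        num_fields = len(cat_cardinalities) + 1
        self.top_mlp = nn.Sequential(
            nn.Linear(num_fields * emb_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, dense_x, sparse_x):
        # dense_x: (batch, num_dense) floats; sparse_x: (batch, num_sparse) integer ids
        feats = [self.bottom_mlp(dense_x)]
        feats += [emb(sparse_x[:, i]) for i, emb in enumerate(self.embeddings)]
        return torch.sigmoid(self.top_mlp(torch.cat(feats, dim=1)))

# hypothetical usage: two categorical fields and four dense features
model = TinyDLRM(cat_cardinalities=[1000, 500], num_dense=4)
scores = model(torch.randn(8, 4), torch.randint(0, 500, (8, 2)))  # (8, 1) probabilities
```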

Research and Tutorials

  • Training a single AI model can emit as much carbon as five cars in their lifetimes
    (MIT Technology Review)
    The artificial-intelligence industry is often compared to the oil industry: once mined and refined, data, like oil, can be a highly lucrative commodity. Now it seems the metaphor may extend even further: like its fossil-fuel counterpart, the process of deep learning has a significant environmental impact.
  • HIRO: Hierarchical Reinforcement Learning with Off-Policy Correction
    (GitHub Blog)
    Traditional reinforcement learning algorithms reason at the level of atomic actions, which makes them hard to scale to complex tasks. Hierarchical Reinforcement Learning (HRL) introduces high-level abstraction, whereby the agent can plan on different scales. This tutorial presents an HRL algorithm designed for goal-directed tasks, in which the agent tries to reach particular goal states.
  • Human in the Loop: Deep Learning without Wasteful Labelling
    (Oxford Applied and Theoretical Machine Learning Group)
    This tutorial presents BatchBALD: a new, practical method for choosing batches of informative points in Deep Active Learning, which avoids the labeling redundancies that plague existing methods (a toy sketch of the underlying BALD score appears after this list).
  • Everything you need to know about TensorFlow 2.0
    (Hackernoon)
    This workshop highlights what has changed from the previous 1.x version of TensorFlow. You can follow along with the main topics easily and review the Colab notebook for practical code examples; a minimal example of the 2.x style appears after this list.
  • Weight Agnostic Neural Networks
    (GitHub Blog)
    This work asks to what extent neural network architectures alone, without learning any weight parameters, can encode solutions for a given task. The authors also propose a search method for neural network architectures that can already perform a task without any explicit weight training.
  • Drag-and-drop data analytics
    (MIT)
    For years, researchers from MIT and Brown University have been developing an interactive system that lets users drag-and-drop and manipulate data on any touchscreen, including smartphones and interactive whiteboards. Now, they've included a tool that instantly and automatically generates machine-learning models to run prediction tasks on that data.
  • Deep Learning from the Foundations
    (FastAI)
    A new course on Deep Learning from the Foundations shows how to build a state-of-the-art deep learning model from scratch. It takes you from the foundations of implementing matrix multiplication and back-propagation, through high-performance mixed-precision training, to the latest neural network architectures and learning techniques, and everything in between.
  • 6 Powerful Open Source Machine Learning GitHub Repositories for Data Scientists
    (Analytics Vidhya)
    The authors of this blog trawl through every open-source machine learning release each month and pick out the top developments they feel you should absolutely know about. This is an ever-evolving field, and data scientists should stay on top of these breakthroughs or risk being left behind.
  • Large Scale Adversarial Representation Learning
    (arXiv)
    Adversarially trained generative models (GANs) have recently achieved compelling image synthesis results. But despite early successes in using GANs for unsupervised representation learning, they have since been superseded by approaches based on self-supervision. In this work, the authors show that progress in image generation quality translates to substantially improved representation learning performance.
  • Learning World Graphs to Accelerate Hierarchical Reinforcement Learning
    (arXiv)
    In many real-world scenarios, an autonomous agent often encounters various tasks within a single complex environment. The authors propose building a graph abstraction over the environment’s structure to accelerate the learning of these tasks. Here, nodes are important points of interest (pivotal states) and edges represent feasible traversals between them.
  • How we do things with words: Analyzing text as social and cultural data
    (arXiv)
    In this article, the authors describe their experiences with computational text analysis. They hope to achieve three primary goals: first, to shed light on thorny issues not always at the forefront of discussions about computational text analysis methods; second, to provide a set of best practices for working with thick social and cultural concepts; and third, to help promote interdisciplinary collaborations.
  • Dynamics-Aware Unsupervised Discovery of Skills
    (arXiv)
    Conventionally, model-based reinforcement learning (MBRL) aims to learn a global model for the dynamics of the environment. A good model can potentially enable planning algorithms to generate a large variety of behaviors and solve diverse tasks. However, learning an accurate model for complex dynamical systems is difficult, and even then, the model might not generalize well outside the distribution of states on which it was trained. In this work, the authors combine model-based learning with model-free learning of primitives that make model-based planning easy.
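
As a companion to the BatchBALD item above, the snippet below is a toy NumPy sketch of the underlying BALD acquisition score: the mutual information between a point’s predicted label and the model parameters, estimated from Monte Carlo dropout samples. BatchBALD’s contribution is to score candidate batches jointly instead of greedily summing these per-point scores, which is what removes the labeling redundancy; the shapes and names here are illustrative assumptions, not the authors’ code.

```python
# Toy sketch of the BALD acquisition score estimated from MC-dropout samples.
# probs has shape (n_points, n_mc_samples, n_classes): predicted class
# probabilities for each unlabeled pool point under stochastic forward passes.
import numpy as np

def bald_scores(probs, eps=1e-12):
    mean_probs = probs.mean(axis=1)  # marginal predictive distribution, (n_points, n_classes)
    predictive_entropy = -(mean_probs * np.log(mean_probs + eps)).sum(axis=1)
    expected_entropy = -(probs * np.log(probs + eps)).sum(axis=2).mean(axis=1)
    # mutual information: high when the MC samples disagree, i.e. informative points
    return predictive_entropy - expected_entropy

# hypothetical pool: 100 points, 20 MC-dropout samples, 10 classes
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=(100, 20))
pool_ranking = np.argsort(-bald_scores(probs))  # greedy per-point BALD ranking
print(pool_ranking[:5])
```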
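And for the TensorFlow 2.0 workshop above, here is a minimal, generic example of the 2.x style it covers: eager execution by default, tf.keras as the primary high-level API, and @tf.function tracing Python functions into graphs in place of the 1.x Session and placeholder workflow. It is an illustration of the idioms, not code taken from the workshop’s Colab notebook.

```python
# Minimal TensorFlow 2.x example: Keras as the high-level API, eager execution
# by default, and @tf.function compiling a Python function into a graph
# (replacing the 1.x Session/placeholder workflow).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# toy data: 64 examples with 4 features and 3 classes
x = tf.random.normal((64, 4))
y = tf.random.uniform((64,), maxval=3, dtype=tf.int32)
model.fit(x, y, epochs=1, verbose=0)          # no tf.Session() required

@tf.function                                  # traced into a graph for speed
def predict_class(batch):
    return tf.argmax(model(batch), axis=-1)

print(predict_class(x[:5]).numpy())           # tensors evaluate eagerly
```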
Photo by Robert Geirhos, “Where We See Shapes, AI Sees Textures”

AI and ML in Society

  • Machine Learning Doesn't Introduce Unfairness—It Reveals It
    (DanielMiessler.com)
    Many people are concerned about the use of machine learning in credit rating and the broader FinTech space. The concern is that any service providing key human needs, such as being able to own a home, should be free of bias in its filtering process. The problem is that machine learning improves with more data, so companies will inevitably search for and incorporate more signals to improve their ability to predict who will pay and who will default. So the question is not whether FinTech will use ML (it will), but rather how to improve that signal without introducing bias.
  • AI Goes to School
    (Forbes)
    The growth of AI-related jobs far outpaces the increase in the number of interested and capable job seekers. According to an O’Reilly survey, almost half of enterprises cited a skills shortage as a barrier to AI adoption. Early exposure to AI concepts and basic skills is seen as critical to fixing that shortage, so AI is being introduced into high school classrooms.
  • How Conversational Artificial Intelligence Is Providing Companionship to the Elderly
    (Forbes)
    For the elderly, loneliness often becomes the norm due to bereavement, retirement, redundancy, ill health, and other factors. Looking to address these challenges, Accenture has developed a solution that uses conversational AI to let people capture memorable stories for future generations while providing companionship.
  • Where We See Shapes, AI Sees Textures
    (Quanta Magazine)
    To make deep learning algorithms use shapes to identify objects, as humans do, researchers trained the systems with images that had been “painted” with irrelevant textures. The systems’ performance improved, a result that may hold clues about the evolution of our own vision.
  • Making Sure AI Is Socially Responsible
    (Forbes)
    The “AI for good” trend has led to the creation of initiatives such as the Google AI Impact Challenge, which identifies and supports 20 startups creating positive social impact through the application of AI to challenges across a broad range of sectors, from healthcare to journalism, energy to communication, and education to environmentalism.
  • Facebook and CMU’s ‘superhuman’ poker AI beats human pros
    (The Verge)
    A program, designed by researchers from Facebook’s AI lab and Carnegie Mellon University, has bested some of the world’s top poker players in a series of games of six-person no-limit Texas Hold ‘em poker. Pluribus, the program, won an average of $5 per hand with hourly winnings of around $1,000 — a “decisive margin of victory,” according to the researchers.
  • Using artificial intelligence to detect discrimination
    (PennState)
    A new artificial intelligence (AI) tool for detecting unfair discrimination — such as on the basis of race or gender — has been created by researchers at Penn State and Columbia University. Preventing unfair treatment of individuals on the basis of race, gender or ethnicity, for example, has been a long-standing concern of civilized societies. However, detecting such discrimination resulting from decisions, whether by human decision makers or automated AI systems, can be extremely challenging.

Join us in two weeks for the next edition of Up to Speed on AI and Deep Learning!
