
New data analysis competitions

Tech

  • DeepMind announces ethics group to focus on problems of AI.

    DeepMind, Google’s London-based AI research sibling, has opened a new unit focused on the ethical and societal questions raised by artificial intelligence.

    The new research unit will aim “to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all”, according to the company, which made headlines in 2016 for building the first machine to beat a world champion at the ancient Asian board game Go.

    The company is bringing in external advisers from academia and the charitable sector to advise the unit, including Columbia development professor Jeffrey Sachs, Oxford AI professor <a href="https://www.theguardian.com/technology/2016/jun/12/nick-bostrom-artificial-intelligence-machine">Nick Bostrom</a>, and climate change campaigner Christiana Figueres.

  • Machine Learning’s Implications for Fairness and Justice.

    Nowadays, government is armed with algorithms that can forecast domestic violence and employee effectiveness, allowing it to perform its duties more efficiently and to reach correct results more often. But these algorithms can encode hidden biases that disproportionately and adversely impact minorities. What, then, should government consider when implementing predictive algorithms? Where should it draw the line between effectiveness and equality?

    Panelists speaking at the University of Pennsylvania Law School grappled with these questions during the second of four workshops that are part of a larger Optimizing Government Project that seeks to inform the use of machine learning in government. The panel, moderated by Penn Law professor Cary Coglianese, sought to conceptualize fairness and equality and to distill their philosophical and legal implications for machine learning. (A toy sketch of one common fairness check appears after this list.)

  • New Theory Cracks Open the Black Box of Deep Neural Networks.

    [About not knowing how deep neural nets really work] Last month, a YouTube video of a conference talk in Berlin, shared widely among artificial-intelligence researchers, offered a possible answer. In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators <a href="https://arxiv.org/pdf/physics/0004057.pdf">first described</a> in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied. (A small numerical sketch of the bottleneck objective follows below.)
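To make the bottleneck idea concrete: the theory frames learning as finding a representation T of the input X that minimizes I(X;T) - β·I(T;Y), i.e. compressing X as much as possible while keeping the information relevant to the label Y. Below is a minimal Python sketch of that objective on a toy discrete problem; the joint distribution, the encoder p(t|x), and the value of β are all invented for illustration and are not from Tishby's experiments.

```python
import numpy as np

def mutual_information(joint):
    """I(A;B) in bits, given a joint probability table p(a, b)."""
    pa = joint.sum(axis=1, keepdims=True)   # marginal p(a)
    pb = joint.sum(axis=0, keepdims=True)   # marginal p(b)
    nz = joint > 0                          # avoid log(0)
    return float((joint[nz] * np.log2(joint[nz] / (pa * pb)[nz])).sum())

# Toy joint distribution p(x, y): 4 input symbols, 2 labels (invented).
p_xy = np.array([[0.30, 0.05],
                 [0.25, 0.05],
                 [0.05, 0.15],
                 [0.05, 0.10]])
p_x = p_xy.sum(axis=1)

# A stochastic "encoder" p(t | x) compressing 4 inputs into 2 codes,
# standing in for an internal network layer T (also invented).
p_t_given_x = np.array([[0.9, 0.1],
                        [0.8, 0.2],
                        [0.2, 0.8],
                        [0.1, 0.9]])

p_xt = p_x[:, None] * p_t_given_x   # joint p(x, t)
p_ty = p_t_given_x.T @ p_xy         # joint p(t, y), via the T -- X -- Y chain

beta = 2.0  # trade-off: how much relevant information to keep
ib = mutual_information(p_xt) - beta * mutual_information(p_ty)
print(f"I(X;T) = {mutual_information(p_xt):.3f} bits, "
      f"I(T;Y) = {mutual_information(p_ty):.3f} bits, "
      f"IB objective = {ib:.3f}")
```

Sweeping β traces out the compression/relevance trade-off; Tishby's claim is that deep networks implicitly move along this trade-off as training squeezes out irrelevant detail.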
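And on the fairness item above: one concrete check that often comes up in these debates is demographic parity, comparing an algorithm's positive-decision rates across protected groups. The sketch below uses entirely invented scores and an arbitrary threshold (nothing here is from the panel); the 0.8 cutoff is the common "four-fifths rule" heuristic for flagging disparate impact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical risk scores and a protected-group indicator (0 or 1).
scores = rng.uniform(size=1000)
group = rng.integers(0, 2, size=1000)
flagged = scores > 0.7          # the algorithm's positive decisions

rate_0 = flagged[group == 0].mean()
rate_1 = flagged[group == 1].mean()

print(f"positive rate, group 0: {rate_0:.2%}")
print(f"positive rate, group 1: {rate_1:.2%}")
# Four-fifths rule heuristic: a ratio below 0.8 suggests disparate impact.
print(f"disparate-impact ratio: {min(rate_0, rate_1) / max(rate_0, rate_1):.2f}")
```

Demographic parity is only one of several competing fairness criteria, which is exactly the kind of line-drawing the Penn panel was grappling with.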

Visualizations


Data Links is a periodic blog post, published on Sundays (specific time may vary), which contains interesting links about data science, machine learning and related topics. You can subscribe to it using the general blog RSS feed, or this one, which only contains these articles, if you are not interested in other things I might publish.

Have you read an article you liked? Would you like to suggest it for the next issue? Just contact me!