  • Apple is sharing your face with apps. That’s a new privacy worry.

    Poop that mimics your facial expressions was just the beginning.

    It’s going to hit the fan when the face-mapping tech that powers the iPhone X’s cutesy “Animoji” starts being used for creepier purposes. And Apple just started sharing your face with lots of apps.


  • Future wars may depend as much on algorithms as on ammunition, report says.

    The Pentagon is increasingly focused on the notion that the might of U.S. forces will be measured as much by the advancement of their algorithms as by the ammunition in their arsenals. And so as it seeks to develop the technologies of the next war amid a technological arms race with China, the Defense Department has steadily increased spending in three key areas: artificial intelligence, big data and cloud computing, according to a recent report.

  • Hedge funds embrace machine learning—up to a point.

    Artificial intelligence (AI) has already changed some activities, including parts of finance like fraud prevention, but not yet fund management and stock-picking. That seems odd: machine learning, a subset of AI that excels at finding patterns and making predictions using reams of data, looks like an ideal tool for the business. Yet well-established “quant” hedge funds in London or New York are often sniffy about its potential. In San Francisco, however, where machine learning is so much part of the furniture the term features unexplained on roadside billboards, a cluster of upstart hedge funds has sprung up in order to exploit these techniques.

  • AI Professor Details Real-World Dangers of Algorithm Bias.

    However quickly artificial intelligence evolves, and however steadfastly it becomes embedded in our lives—in health, law enforcement, sex, etc.—it can’t outpace the biases of its creators, humans. Kate Crawford, a Microsoft researcher and co-founder of AI Now, a research institute studying the social impact of artificial intelligence, delivered an incredible keynote, titled “The Trouble with Bias,” at the Neural Information Processing Systems Conference on Tuesday. In it, she presented a fascinating breakdown of the different types of harm done by algorithmic biases.

    Here's the video. I could only find a copy hosted on Facebook.

Data Links is a periodic blog post published on Sundays (specific time may vary) containing interesting links about data science, machine learning and related topics. You can subscribe to it using the general blog RSS feed or, if you are not interested in other things I might publish, this one, which only contains these articles.

Have you read an article you liked and would like to suggest it for the next issue? Just contact me!