Which programming languages have the happiest (and angriest) commenters? (thanks, Yashar.)
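The study boils down to aggregating sentiment over code comments. As a rough sketch of how such an analysis can be set up (the sample corpus, the VADER scorer, and the per-language averaging are my assumptions, not necessarily the article's actual method), one could score each comment and average by language:

```python
# Minimal sketch of per-language comment sentiment scoring.
# Assumes `pip install vaderSentiment`; the sample comments are invented.
from collections import defaultdict
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

comments = [
    ("Python",  "This helper is lovely, thanks for the clean API!"),
    ("C++",     "Why does this template error make no sense at all?!"),
    ("Haskell", "Elegant fold, though the types took me a while."),
]

analyzer = SentimentIntensityAnalyzer()
scores = defaultdict(list)
for language, text in comments:
    # `compound` is VADER's normalized sentiment score in [-1, 1].
    scores[language].append(analyzer.polarity_scores(text)["compound"])

for language, vals in sorted(scores.items()):
    print(f"{language}: mean sentiment {sum(vals) / len(vals):+.2f}")
```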
[How to Write Pelican Blog Posts using RMarkdown & Knitr](http://michaeltoth.me/how-to-write-pelican-blog-posts-using-rmarkdown-knitr.html) (thanks, Iñaki.)
Last year hundreds of breaches involving millions of health records were reported to the Department of Health and Human Services — with the hackings of the health insurers Anthem and Premera Blue Cross alone affecting some 90 million Americans. At least 10 hospitals and health care systems have had their patient data and information systems literally held for ransom. This month, the national medical lab Quest Diagnostics reported that information on 34,000 patients had been stolen. And these breaches are just the ones that have been disclosed.
The use of facial recognition software for commercial purposes is becoming more common, but, as Amazon scans faces in its physical shop and Facebook scans users' photos to add tags, those concerned about their privacy are fighting back.
Berlin-based artist and technologist Adam Harvey aims to overwhelm and confuse these systems by presenting them with thousands of false hits so they can’t tell which faces are real.
The Hyperface project involves printing patterns on to clothing or textiles, which then appear to have eyes, mouths and other features that a computer can interpret as a face.
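Hyperface exploits the pattern-matching stage of face detection. As a sketch of the mechanism under attack (OpenCV's stock Haar cascade and the file names here are stand-ins, not the detectors Harvey actually targets), note that a detector simply returns every face-like region it finds, with no way to tell printed decoys from real faces:

```python
# Sketch: a stock Haar-cascade face detector returns every face-like
# region in the image, so flooding the scene with face-like patterns
# inflates this list. Assumes `pip install opencv-python` and a local photo.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("scene.jpg")  # hypothetical input photo
if image is None:
    raise SystemExit("scene.jpg not found")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each (x, y, w, h) is a candidate face; the detector has no notion of
# which hits are real and which are decoys printed on fabric.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"{len(faces)} face-like regions detected")
```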
Two employees at the East Lake County Library created a fictional patron called Chuck Finley -- entering fake driver's license and address details into the library system -- and then used the account to check out 2,361 books over nine months in 2016. The goal was to trick the system into believing that the books they loved were still being circulated to the library's patrons, rescuing them from automated purges of low-popularity titles.
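The purge being gamed is presumably some variant of circulation-based weeding. A toy version (the field names and the nine-month idle threshold are invented, not the library's actual rules) makes it obvious why fake checkouts work:

```python
# Toy circulation-based weeding rule: flag titles for removal when they
# have not been checked out within a cutoff window. Field names and the
# threshold are assumptions, not any real library system's schema.
from datetime import date, timedelta

def titles_to_purge(catalog, today, max_idle_days=270):
    """Return titles whose last checkout is older than the cutoff."""
    cutoff = today - timedelta(days=max_idle_days)
    return [
        book["title"]
        for book in catalog
        if book["last_checkout"] is None or book["last_checkout"] < cutoff
    ]

catalog = [
    {"title": "Beloved but unread", "last_checkout": date(2015, 1, 10)},
    {"title": "Saved by 'Chuck Finley'", "last_checkout": date(2016, 11, 2)},
]
print(titles_to_purge(catalog, today=date(2016, 12, 31)))
# -> ['Beloved but unread']; any checkout, real or fake, resets the clock.
```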
Yesterday I wrote a post about the unsurprising discriminatory nature of recidivism models. Today I want to add to that post with an important goal in mind: we should fix recidivism models, not trash them altogether.
The truth is, the current justice system is fundamentally unfair, so throwing out algorithms because they are also unfair is not a solution. Instead, let’s improve the algorithms and then see if judges are using them at all.
The great news is that the paper I mentioned yesterday proposes three methods to do just that, and plenty of other papers address the same question with approaches that yield increasingly encouraging results.
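The excerpt doesn't say which three methods the paper proposes, but one common post-processing approach in this literature picks a separate decision threshold per group so that error rates line up across groups. A minimal sketch on synthetic scores:

```python
# Sketch of one common post-processing fix (not necessarily one of the
# paper's three methods): choose a per-group decision threshold so that
# false positive rates roughly match across groups. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def false_positive_rate(scores, labels, threshold):
    negatives = labels == 0
    return np.mean(scores[negatives] >= threshold)

def threshold_for_fpr(scores, labels, target_fpr):
    """Smallest threshold whose FPR does not exceed the target."""
    candidates = np.unique(scores)  # sorted ascending
    for t in candidates:
        if false_positive_rate(scores, labels, t) <= target_fpr:
            return t
    return candidates[-1]

# Synthetic risk scores for two groups with different score distributions.
scores_a, labels_a = rng.random(1000), rng.integers(0, 2, 1000)
scores_b, labels_b = rng.random(1000) ** 0.5, rng.integers(0, 2, 1000)

target = 0.10
t_a = threshold_for_fpr(scores_a, labels_a, target)
t_b = threshold_for_fpr(scores_b, labels_b, target)
print(f"group A threshold {t_a:.2f}, group B threshold {t_b:.2f}")
# Both groups end up with FPR <= 10% despite different score distributions.
```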
Today, we get an answer of sorts thanks to the work of Hemank Lamba at Carnegie Mellon University in Pittsburgh and a few friends. They've studied the nature of selfie deaths and have begun the tricky task of finding a way to warn people when the process of taking a selfie could be dangerous.
What a time to be alive.
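The excerpt doesn't detail the researchers' method, and the real system would presumably combine location, image, and text signals. Still, a crude location-only check (the hazard list, coordinates, and 100 m radius are invented for illustration) conveys the general idea of warning someone at a risky selfie spot:

```python
# Crude location-based danger check for a hypothetical selfie-warning app.
# The hazard coordinates and the 100 m radius are invented for illustration.
from math import asin, cos, radians, sin, sqrt

# Hypothetical known-hazard spots: (name, latitude, longitude).
HAZARDS = [
    ("cliff edge", 46.5763, 7.9904),
    ("railway crossing", 46.5800, 7.9850),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 6_371_000 * 2 * asin(sqrt(a))

def selfie_warning(lat, lon, radius_m=100):
    """Return hazards within `radius_m` of the phone's position."""
    return [name for name, hlat, hlon in HAZARDS
            if haversine_m(lat, lon, hlat, hlon) <= radius_m]

print(selfie_warning(46.5762, 7.9903))  # -> ['cliff edge']
```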
Blue Feed, Red Feed. See Liberal Facebook and Conservative Facebook, Side by Side.
Data Links is a periodic blog post, published on Sundays (the exact time may vary), that collects interesting links about data science, machine learning, and related topics. You can subscribe using the general blog RSS feed or, if you are not interested in anything else I might publish, using this one, which carries only these articles.
Have you read an article you liked and would like to suggest it for the next issue? Just contact me!