New data analysis competitions
- On Kaggle, Data Science For Good: DonorsChoose.org.
Privacy
- Law enforcement agencies have embraced facial recognition. And contractors have returned the embrace, offering up a variety of "solutions" that are long on promise but short on accuracy. That hasn't stopped the mutual attraction, as government agencies are apparently willing to sacrifice people's lives and freedom during these extended beta tests.
The latest example of widespread failure comes from the UK, where the government's embrace of surveillance equipment far exceeds that of the United States. Matt Burgess of Wired obtained documents detailing the South Wales Police's deployment of automated facial recognition software. What's shown in the FOI docs should worry everyone who isn't part of UK law enforcement. (It should worry law enforcement as well, but strangely does not seem to bother them.)
- Senator Wyden Demands Answers from Prison Phone Service Caught Sharing Cellphone Location Data.
Do you use Verizon, AT&T, Sprint, or T-Mobile? If so, your real-time cell phone location data may have been shared with law enforcement without your knowledge or consent.
How could this happen? Well, a company that provides phone services to jails and prisons has been collecting location information on all Americans and sharing it with law enforcement—with little more than a “pinky promise” from the police that they’ve obtained proper legal process.
Tech
- Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone.
Today we announce Google Duplex, a new technology for conducting natural conversations to carry out “real world” tasks over the phone. The technology is directed towards completing specific tasks, such as scheduling certain types of appointments. For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, like they would to another person, without having to adapt to a machine.
- How can we be sure AI will behave? Perhaps by watching it argue with itself.
The concept comes from researchers at OpenAI, a nonprofit founded by several Silicon Valley luminaries, including Y Combinator partner Sam Altman, LinkedIn chair Reid Hoffman, Facebook board member and Palantir founder Peter Thiel, and Tesla and SpaceX head Elon Musk.
The OpenAI researchers have previously shown that AI systems that train themselves can sometimes develop unexpected and unwanted habits. For example, in a computer game, an agent may figure out how to "glitch" its way to a higher score. In some cases it may be possible for a person to supervise the training process. But if the AI program is doing something too complex for a person to follow, this might not be feasible. So the researchers suggest having two systems debate a particular objective instead, with a human judging the exchange.
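The intuition behind debate can be illustrated with a toy sketch (this is my own illustration, not OpenAI's actual setup, which uses trained agents and human judges; the function and variable names here are invented). Two "debaters" each assert an answer to a question a weak "judge" cannot solve directly, and each must back its claim with a single piece of verifiable evidence. Because true claims are easier to defend under scrutiny, the honest debater tends to win:

```python
def debate_max(xs, honest_claim, dishonest_claim):
    """Toy debate: which debater correctly names the maximum of xs?

    The judge is deliberately 'weak': it cannot scan the whole list,
    only verify the single element each debater points to as evidence.
    """
    # The honest debater points at the position of the true maximum.
    honest_evidence = xs.index(honest_claim)
    # The dishonest debater must also point somewhere; the best it can
    # do is point at its (false) claimed value if present, else anywhere.
    dishonest_evidence = xs.index(dishonest_claim) if dishonest_claim in xs else 0

    verified_honest = xs[honest_evidence]
    verified_dishonest = xs[dishonest_evidence]

    # The judge rules for whichever debater exhibited the larger
    # verified value, without ever computing the maximum itself.
    return "honest" if verified_honest >= verified_dishonest else "dishonest"


xs = [3, 17, 5, 42, 8]
print(debate_max(xs, honest_claim=42, dishonest_claim=17))  # honest wins
```

The point of the sketch is only the asymmetry: a lying debater cannot produce evidence stronger than the truth, so a judge much weaker than either debater can still reach the right verdict.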
Visualizations
Data Links is a periodic blog post, published on Sundays (specific time may vary), that collects interesting links about data science, machine learning, and related topics. You can subscribe to it using the general blog RSS feed, or this one, which contains only these articles, if you are not interested in other things I might publish.
Have you read an article you liked and would like to suggest it for the next issue? Just contact me!