Palantir began work with the LAPD in 2009. The impetus was federal funding. After several Sept. 11 postmortems called for more intelligence sharing at all levels of law enforcement, money started flowing to Palantir to help build data integration systems for so-called fusion centers, starting in L.A. There are now more than 1,300 trained Palantir users at more than a half-dozen law enforcement agencies in Southern California, including local police and sheriff’s departments and the Bureau of Alcohol, Tobacco, Firearms and Explosives.
The LAPD uses Palantir’s Gotham product for Operation Laser, a program to identify and deter people likely to commit crimes. Information from rap sheets, parole reports, police interviews, and other sources is fed into the system to generate a list of people the department defines as chronic offenders, says Craig Uchida, whose consulting firm, Justice & Security Strategies Inc., designed the Laser system. The list is distributed to patrolmen, with orders to monitor and stop the pre-crime suspects as often as possible, using excuses such as jaywalking or fix-it tickets.
EarthNow to Deliver Real-Time Video via Large Satellite Constellation. Note that this is just a press release, but the short-term implications are obvious. Via IP list.
EarthNow LLC announces intent to deploy a large constellation of advanced imaging satellites that will deliver real-time, continuous video of almost anywhere on Earth.
For the first time, the US Food and Drug Administration has approved an artificial intelligence diagnostic device that doesn’t need a specialized doctor to interpret the results. The software program, called IDx-DR, can detect a form of eye disease by looking at photos of the retina.
The Echo Look won’t tell you why it’s making its decisions. And yet it purports to show us our ideal style, just as algorithms like Netflix recommendations, Spotify Discover, and Facebook and YouTube feeds promise us an ideal version of cultural consumption tailored to our personal desires. In fact, this promise is inherent in the technology itself: Algorithms, as I’ll loosely define them, are sets of equations that work through machine learning to customize the delivery of content to individuals, prioritizing what they think we want, and evolving over time based on what we engage with.
Confronting the Echo Look’s opaque statements on my fashion sense, I realize that all of these algorithmic experiences are matters of taste: the question of what we like and why we like it, and what it means that taste is increasingly dictated by black-box robots like the camera on my shelf.
This AI Will Turn Your Dog Into a Cat. This one is anecdotal. The important thing, for a Spaniard living in Canada, is the article's reference to the original system, which was used to transform winter images into summer ones.
As detailed in a paper published to arXiv, the neural net is actually a generative adversarial network (GAN), which is a way of training a machine learning algorithm without human supervision. In GANs, two neural nets are pitted against one another: One neural net generates new images and tries to trick the other neural net into thinking the images are real. If the other neural net is able to tell the generated images are false, the neural net generating the images keeps tweaking its algorithm until it generates images that are nearly indistinguishable from the real deal.
In this case, the neural nets were first fed images of dogs. A neural net would take the dog image, break it down into code describing various features of the dog photo and then combine these features with various “style codes” derived from features taken from an image of a cat. Each iteration of this mashup of dog features and cat features would result in a hybrid image, which would then be fed to a neural network tasked with determining whether the image was genuine.
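The adversarial setup described above can be sketched with a toy one-dimensional GAN. This is a minimal illustration, not the paper's model: the real data distribution, the affine generator, the logistic discriminator, and the learning rate are all illustrative choices. It shows the core loop the article describes: the discriminator learns to separate real from generated samples, and the generator keeps tweaking its parameters until its output is hard to distinguish from the real thing.

```python
import numpy as np

# Toy 1-D GAN (illustrative assumptions, not the paper's architecture):
# "real" data ~ N(4, 0.5); the generator shifts unit noise by a learned
# offset; a logistic discriminator tries to tell the two apart.
rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

wd, bd = 0.0, 0.0   # discriminator parameters
bg = 0.0            # generator parameter (learned shift)
lr = 0.05

D = lambda x: sigmoid(wd * x + bd)   # discriminator: P(x looks real)
G = lambda z: 0.5 * z + bg           # generator: fake sample from noise z

for _ in range(2000):
    # Discriminator step: push D(real) toward 1, D(fake) toward 0.
    xr, xf = rng.normal(4.0, 0.5), G(rng.normal())
    dr, df = D(xr), D(xf)
    wd += lr * ((1 - dr) * xr - df * xf)
    bd += lr * ((1 - dr) - df)
    # Generator step: shift output toward where D currently says "real".
    xf = G(rng.normal())
    bg += lr * (1 - D(xf)) * wd

fake_mean = float(np.mean(G(rng.normal(size=5000))))
print(f"generated mean ~ {fake_mean:.2f} (real mean is 4.0)")
```

With these settings the generated mean typically drifts from 0 toward the real mean of 4, oscillating around it rather than converging exactly, which is the classic GAN training dynamic the article alludes to.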
Data scientists in academia and industry are increasingly recognizing the importance of integrating ethics into data science curricula. Recently, a group of faculty and students gathered at New York University before the annual FAT* conference to discuss the promises and challenges of teaching data science ethics, and to learn from one another's experiences in the classroom. This blog post is the first of two that will summarize the discussions held at this workshop.
Data Links is a periodic blog post published on Sundays (the exact time may vary) containing interesting links about data science, machine learning, and related topics. You can subscribe via the general blog RSS feed or this dedicated one, which contains only these articles, if you are not interested in other things I might publish.
Have you read an article you liked and would like to suggest it for the next issue? Just contact me!