Data analysis competitions
- On Kaggle: TalkingData AdTracking Fraud Detection Challenge. Up to $25,000 in prizes.
[PDF, academic paper] Is writing style predictive of scientific fraud?
The ACLU has compiled a list of tech tools that are being used by local police departments ostensibly for crimefighting—and it’s long. Police departments increasingly own cell site simulators, or stingrays, to hack into cell phones. They can read license plates and monitor E-ZPass token usage to create a detailed picture of a person’s driving route. In addition to Closed Circuit Television cameras, light aircraft and light bulbs with surveillance capacity are giving Jane Jacobs’ idea of “eyes on the street” a new, more insidious meaning. Some new technology can even enable police to see through solid barriers—like car doors and house walls.
[...] In light of all this, the ACLU launched the Community Control Over Police Surveillance (CCOPS) campaign in 2016, urging city governments to pass an ordinance that ensures a public debate before any such technologies are adopted. It’s along the lines of efforts to curb the proliferation of military equipment at the local level. So far, Seattle; Nashville; Somerville, Massachusetts; and Santa Clara County, California, have passed such legislation. Nineteen other local governments (including two states) are considering similar laws.
As artificial intelligence (AI) and big data technologies become more prevalent, a survey has found that three out of four people in China are worried about the threat that AI poses to their privacy, challenging the popular notion that the Chinese care little about giving up personal data.
State broadcaster China Central Television (CCTV) and Tencent Research surveyed 8,000 respondents on their attitudes toward AI as part of CCTV’s China Economic Life Survey. The results show that 76.3 per cent see certain forms of AI as a threat to their privacy, even as they believe that AI holds much development potential and will permeate different industries. About half of the respondents said that they believe AI is already affecting their work life, while about a third see AI as a threat to their jobs.
In this piece, we propose three goals for developing future policy on AI and national security: preserving U.S. technological leadership, supporting peaceful and commercial use, and mitigating catastrophic risk. By looking at four prior cases of transformative military technology – nuclear, aerospace, cyber, and biotech – we develop lessons learned and recommendations for national security policy toward AI.
It’s the night of September 30, 1840. A light-skinned man with long blond hair and brown eyes lies in a makeshift bed in a distant campsite. A female speech synthesizer tells me that this man has no parents nor any friends or enemies. His name is Jonathan Patience and he is a computer-generated fictional character living in Sheldon County number 1,515,459,035. I am listening to this description of Jonathan’s life on a podcast called Sheldon County. Its host and creator is an artificial intelligence named SHELDON.
SHELDON was created by University of California, Santa Cruz PhD student James Ryan as part of his thesis. It sifts through the experiences of characters living within procedurally generated American counties and creates storylines based on their experiences. The characters in these counties have their own lives and make their own decisions. They interact with one another and even possess unique value systems and goals. SHELDON then turns these stories into a narrative-driven, Twin Peaks-inspired podcast called Sheldon County.
Google has partnered with the United States Department of Defense to help the agency develop artificial intelligence for analyzing drone footage, a move that set off a firestorm among employees of the technology giant when they learned of Google’s involvement.
Google’s pilot project with the Defense Department’s Project Maven, an effort to identify objects in drone footage, has not been previously reported, but it was discussed widely within the company last week when information about the project was shared on an internal mailing list, according to sources who asked not to be named because they were not authorized to speak publicly about the project.
Some Google employees were outraged that the company would offer resources to the military for surveillance technology involved in drone operations, sources said, while others argued that the project raised important ethical questions about the development and use of machine learning.
The vast majority of Americans expect artificial intelligence to lead to job losses in the coming decade, but few see it coming for their own position.
And despite the expected decline in employment, the public widely embraces artificial intelligence in attitude and in practice. About five in six Americans already use a product or service that features it, according to a survey that was conducted last fall and from which new findings were released on Tuesday.
[R]esearchers at Chinese search giant Baidu have created an A.I. they claim can learn to accurately mimic your voice — based on less than a minute’s worth of listening to it.
“From a technical perspective, this is an important breakthrough showing that a complicated generative modeling problem, namely speech synthesis, can be adapted to new cases by efficiently learning only from a few examples,” Leo Zou, a member of Baidu’s communications team, told Digital Trends. “Previously, it would take numerous examples for a model to learn. Now, it takes a fraction of what it used to.”
In January, a leading machine-learning conference announced that it had selected 11 new papers to be presented in April that propose ways to defend or detect such adversarial attacks. Just three days later, first-year MIT grad student Anish Athalye threw up a webpage claiming to have “broken” seven of the new papers, including from boldface institutions such as Google, Amazon, and Stanford. “A creative attacker can still get around all these defenses,” says Athalye. He worked on the project with Nicholas Carlini and David Wagner, a grad student and professor, respectively, at Berkeley.
From the article: Audio Adversarial Examples.
We have constructed targeted audio adversarial examples on speech-to-text transcription neural networks: given an arbitrary waveform, we can make a small perturbation that when added to the original waveform causes it to transcribe as any phrase we choose.
In prior work, we constructed hidden voice commands, audio that sounded like noise but transcribed to any phrases chosen by an adversary. With our new attack, we are able to improve this and make an arbitrary waveform transcribe as any target phrase.
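The attack above boils down to gradient descent on the input itself: hold the model fixed, and nudge the waveform until the model's output matches a chosen target while keeping the perturbation small. Below is a minimal toy sketch of that idea in NumPy. Everything here is hypothetical for illustration — the "model" is a tiny linear classifier over a 4-sample signal, not the paper's speech-to-text network, and the names (`W`, `predict`, `delta`, `eps`) are made up; the loop structure (descend on a margin loss, clip the perturbation to an L-infinity bound) is the shared idea.

```python
import numpy as np

# Hypothetical 2-class linear "model" over a 4-sample signal.
# Row i of W holds the weights for class i; prediction is the argmax score.
W = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.]])
x = np.array([2., 0., 1., 0.])   # "original waveform", classified as 0
target = 1                        # the label we want the model to emit

def predict(wave):
    return int(np.argmax(W @ wave))

delta = np.zeros_like(x)          # the adversarial perturbation we build up
eps, lr = 1.0, 0.1                # L-infinity budget and step size
for _ in range(100):
    if predict(x + delta) == target:
        break                     # attack succeeded
    scores = W @ (x + delta)
    # Strongest competing class (ignore the target itself).
    rival = int(np.argmax(np.where(np.arange(len(W)) == target, -np.inf, scores)))
    # Gradient of the margin (rival score - target score) w.r.t. the input.
    grad = W[rival] - W[target]
    # Descend on the margin, then clip so the perturbation stays small.
    delta = np.clip(delta - lr * grad, -eps, eps)

print(predict(x), "->", predict(x + delta))   # 0 -> 1
print(float(np.abs(delta).max()))             # ~0.8, within the eps bound
```

The real attack replaces the linear scores with a CTC loss through a recurrent transcription network, so the gradient comes from backpropagation rather than a closed form, but the loop is the same: optimize the perturbation, not the model.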
- On the Italian elections: Hung country.
Data Links is a periodic blog post published on Sundays (specific time may vary) which contains interesting links about data science, machine learning and related topics. If you are not interested in other things I might publish, you can subscribe using the general blog RSS feed or this dedicated feed, which contains only these articles.
Have you read an article you liked and would like to suggest it for the next issue? Just contact me!