Confusing an Algorithm

Awesome, not awesome.

#Awesome
“A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Massachusetts General Hospital (MGH) has created a new deep learning model that can predict from a mammogram if a patient is likely to develop breast cancer in the future. They trained their model on mammograms and known outcomes from over 60,000 patients treated at MGH, and their model learned the subtle patterns in breast tissue that are precursors to malignancy. MIT professor Regina Barzilay, herself a breast cancer survivor, says that the hope is for systems like these to enable doctors to customize screening and prevention programs at the individual level, making late diagnosis a relic of the past.” — Adam Conner-Simons and Rachel Gordon, MIT CSAIL Communications Learn More from CSAIL >
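
To make the training setup concrete: pairing images with known outcomes and fitting a classifier is standard supervised learning. Below is a minimal, hypothetical sketch in PyTorch. It is not the CSAIL/MGH model; the toy architecture, two-class setup, and names are assumptions for illustration only.

```python
# Hypothetical sketch of supervised training on (image, known outcome)
# pairs. This is NOT the CSAIL/MGH model; everything here is a toy stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(                       # tiny convolutional classifier
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),                        # 2 outcomes: cancer later / not
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, outcomes):
    """One gradient step on a batch of images and their known outcomes."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), outcomes)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeated over tens of thousands of labeled scans, steps like this are how a model picks up tissue patterns that correlate with later diagnoses.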

#Not Awesome
“The EU’s online-terrorism bill, Khatib noted, sends the message that sweeping unsavory content under the rug is okay; the social-media platforms will [use machine learning algorithms to] see to it that nobody sees it. He fears the unintended consequences of such a law — that in cracking down on content that’s deemed off-limits in the West, it could have ripple effects that make life even harder for those residing in repressive societies, or worse, in war zones. Any further crackdown on what people can share online, he said, ‘would definitely be a gift for all authoritarian regimes. It would be a gift for Assad.’” — Bernhard Warner, Journalist Learn More from The Atlantic >

What we’re reading.

1/ People who share random details about themselves to confuse Facebook’s algorithms are exposed to “unfiltered, randomized extreme[s]…delight, danger, and drudgery…” Learn More from The Atlantic >

2/ Mark Cuban invests in a company that plans to help police departments scan people’s faces and search the resulting database for individuals who match certain gender, ethnicity, and emotion descriptions. Learn More from Vice >

3/ The countries leading in AI research for military applications are building lethal autonomous tools that currently require human intervention, but could one day make killing decisions without it. Learn More from Axios >

4/ AI researchers try to build algorithms that can resist “adversarial examples,” inputs crafted to intentionally confuse AI technology that performs critical tasks — like scanning luggage at airports and detecting hate speech on online platforms (see the sketch after this list). Learn More from WIRED >

5/ Utility companies are increasingly using AI technologies to predict equipment failures before they cause massive environmental disasters — like California’s deadly wildfires. Learn More from Axios >

6/ AI algorithms that infer a user’s gender based on purchase/browsing history can reinforce norms that are harmful to people of all genders (a toy sketch of this kind of inference also follows this list). Learn More from A List Apart >

7/ Religious leaders begin to take stances on how they believe AI should and should not be used — and areas in which they believe it interferes with the “exclusive responsibility of humans.” Learn More from The Wall Street Journal >
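
For item 4/, an “adversarial example” is an input nudged just enough to flip a model’s prediction. A common textbook construction is the Fast Gradient Sign Method (FGSM); here is a minimal sketch in PyTorch. The model, inputs, and epsilon value are placeholders, not anything from the WIRED piece.

```python
# Minimal FGSM sketch: perturb an input in the direction that most
# increases the model's loss. All names here are illustrative placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return a copy of x perturbed to raise the loss on true labels y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Each input value shifts by +/- epsilon along the loss gradient's sign:
    # a change often imperceptible to people but enough to fool the model.
    return (x + epsilon * x.grad.sign()).detach()

# Hypothetical usage, where `classifier` is any differentiable model:
# adversarial_images = fgsm_perturb(classifier, images, labels)
```

Resisting adversarial examples means training models whose predictions stay stable under small perturbations like this one.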
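
For item 6/, the mechanism being critiqued is ordinary supervised classification over behavioral features. A toy, entirely fabricated sketch with scikit-learn shows how product-gender stereotypes in training data get baked into every future prediction; no real system or dataset is depicted.

```python
# Toy sketch of gender inference from purchase history. The data below is
# fabricated for illustration; it encodes exactly the stereotyped
# product-gender associations the A List Apart piece warns about.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

histories = [                      # invented purchase records
    {"bought_razor": 1, "bought_mascara": 0},
    {"bought_razor": 0, "bought_mascara": 1},
]
labels = ["m", "f"]                # invented self-reported labels

vec = DictVectorizer()
model = LogisticRegression().fit(vec.fit_transform(histories), labels)

# From here on, every shopper gets a gender guess driven by those
# stereotyped associations, regardless of how they actually identify.
```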
