Lee asks whistleblower whether Facebook ads target teenagers

Frances Haugen says artificial intelligence can miss offensive content and deliver it to young users.

(Matt McClain | Pool) Former Facebook data scientist Frances Haugen speaks during a hearing of the Senate Commerce, Science, and Transportation Subcommittee on Consumer Protection, Product Safety, and Data Security, on Capitol Hill, Tuesday, Oct. 5, 2021, in Washington.

Sen. Mike Lee zeroed in on whether Facebook targets children with advertising during his turn questioning Frances Haugen, the former product manager turned whistleblower for the social media giant.

Haugen appeared before the Senate Commerce subcommittee on Tuesday, where she told senators that Facebook’s products, which include Instagram and WhatsApp, are harmful to children.

Lee asked Haugen about a recent experiment by the Technology Transparency Project that succeeded in getting a number of controversial ads, including one promoting anorexia, approved by Facebook. The ads were pulled before they ran, but Lee said their approval suggests some scary conclusions about Facebook’s artificial intelligence.

Lee: “One could argue that it proves Facebook is allowing and perhaps facilitating the targeting of harmful adult-themed ads to our nation’s children. Could you explain to me how these ads with a target audience of 13-to-17-year-old children, how would they possibly be approved by Facebook?”

Haugen: “I did not work on the ad approval system. What is resonant to me is Facebook has a deep focus on scale. It’s possible that none of those ads were seen by a human. Facebook’s A.I. system only catches a very tiny minority of offending content. In the case of children, it’s ads like that.”
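Haugen’s point about scale is easier to see in miniature. Below is a purely illustrative Python sketch, not Facebook’s actual system: the blocklist, the auto_review function, and the sample ads are all hypothetical, but they show how an automated review pass with no human in the loop approves anything its patterns fail to recognize.

```python
# Purely illustrative sketch, NOT Facebook's real review pipeline: a toy
# automated ad screen built on a hypothetical blocklist. It shows how a
# system reviewing ads at scale with no human in the loop can approve
# harmful content that avoids the exact phrases it knows about.

BLOCKED_PHRASES = {"promote anorexia", "lose weight fast"}  # hypothetical

def auto_review(ad_text: str) -> str:
    """Approve or reject an ad automatically, with no human reviewer."""
    lowered = ad_text.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return "rejected"
    # Anything the screen does not recognize as offending sails through.
    return "approved"

print(auto_review("Lose weight fast with this plan"))        # rejected
print(auto_review("Get the thinspiration lifestyle today"))  # approved
```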

Haugen revealed her identity during a 60 Minutes interview on Sunday after providing thousands of pages of internal documents about Facebook’s operations. Those documents were the backbone of a series of blockbuster articles in The Wall Street Journal that showed, among other things, that company executives knew their products were harming children.

Haugen told Lee it’s important to understand what kind of targeting information is available to advertisers, and what artificial intelligence learns about users.

Haugen: “Let’s imagine you had some text on an ad. [The system] would extract features it thought [were] relevant for that ad. In the case of something about partying, it would learn partying is a concept. I’m very suspicious of claims that personalized ads are not being delivered to teenagers on Instagram, because the algorithms learn correlations. They learn interactions. Those ads may still go to kids interested in partying because Facebook almost certainly has a ranking model in the background that says this person wants more party-related content.”
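What Haugen describes, learning a concept from engagement and then ranking content toward it, can be sketched in a few lines. The Python below is a hypothetical toy, not Facebook’s ranking model: the concept vocabulary, the profile built from clicks, and the scoring function are all invented for illustration, but they show how interest-based ranking can steer ads to a teenager without any advertiser explicitly targeting teens.

```python
# Illustrative sketch only, not Facebook's code: a minimal engagement-based
# ranker showing how interest can be inferred from interactions rather than
# from declared ad targeting. All names here (CONCEPTS, build_profile,
# rank_ads) are hypothetical stand-ins.

from collections import Counter

# A crude concept vocabulary: in Haugen's example, "partying" is a concept
# the system learns from the words it sees.
CONCEPTS = {
    "partying": {"party", "nightlife", "drinks"},
    "fitness": {"gym", "workout", "protein"},
}

def extract_concepts(text: str) -> set[str]:
    """Map ad or post text onto coarse concepts."""
    words = set(text.lower().split())
    return {c for c, vocab in CONCEPTS.items() if words & vocab}

def build_profile(user_clicks: list[str]) -> Counter:
    """Learn affinities purely from what a user engaged with."""
    profile = Counter()
    for text in user_clicks:
        profile.update(extract_concepts(text))
    return profile

def rank_ads(profile: Counter, ads: list[str]) -> list[str]:
    """Score each ad by overlap with learned affinities; no age check here."""
    return sorted(ads, key=lambda ad: -sum(profile[c] for c in extract_concepts(ad)))

# A teen who clicks party posts ends up ranked toward party ads, even though
# no advertiser explicitly targeted "teens who party".
teen_profile = build_profile(["epic party tonight", "best nightlife spots"])
print(rank_ads(teen_profile, ["New gym workout plan", "Drinks specials party bus"]))
```

In a toy like this, nothing in the ranker checks the user’s age; it simply follows learned affinities, which is the dynamic Lee and Haugen go on to discuss.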

Lee: “What that says to me is that while they’re saying they’re not targeting teens with those ads, the algorithm might do some of that work for them, which might explain why they collect the data even while they claim they’re not targeting those ads in that way.”

Haugen: “I can’t speak to whether or not that’s the intention. It’s very, very, very difficult to understand these algorithms today. The algorithms unintentionally learn. So it’s very hard to disentangle these factors as long as you have an engagement-based ranking.”