Stanford University research under ethical review after falsely claiming that artificial intelligence could determine a person’s sexual orientation

NEW YORK – GLAAD, the world’s largest LGBTQ media advocacy organization, today applauded The Journal of Personality and Social Psychology’s decision to place Stanford University research claiming artificial intelligence (AI) could determine a person’s sexual orientation using facial recognition under an ethical review.

According to The Outline, The Journal of Personality and Social Psychology is re-examining the research. “An ethical review is underway right at this moment,” one of the journal’s editors, Shinobu Kitayama, confirmed to the outlet.

“Academic freedom is a right of research professionals, but hyperbolic research claims around being LGBTQ can put people in harm’s way,” said Jim Halloran, GLAAD’s Chief Digital Officer. “The research identified a pattern of physical traits in a small subset of photos that people uploaded to internet dating sites, not that facial recognition could detect if someone is gay or lesbian. Months ago, Stanford University and the researchers invited LGBTQ organizations to provide feedback on this research, but when concerns were raised, they acknowledged the limitations of the research and recklessly proceeded anyway.”

Last week, a professor affiliated with Stanford University published a research study that led several media outlets to wrongly suggest that AI can be used to detect sexual orientation. GLAAD and HRC urged media outlets that covered the study, or plan to cover it, to include the many flaws in the study’s methodology, including that it made inaccurate assumptions and categorically excluded non-white subjects.

GLAAD and HRC called attention to the following points included in the research:

  • The study did not look at any non-white individuals.
  • The study did not independently verify crucial information including age and sexual orientation, and took at face value information appearing in online dating profiles.
  • The study assumed there was no difference between sexual orientation and sexual activity, which is incorrect.
  • The study assumed there were only two sexual orientations — gay and straight — and did not address bisexual individuals.
  • The research itself states that “outside the lab, the accuracy rate would be much lower” (where “the lab” means certain dating sites), and the accuracy is 10 points lower for women.
  • The study claims to detect gay men from the pool of photos on the dating sites with 81% accuracy. Even if this figure held despite the aforementioned flaws, it would still mean that heterosexual men could be misidentified as gay nearly 20% of the time (see the sketch after this list).
  • The study reviewed superficial characteristics in the photos of out gay men and women on dating sites such as weight, hairstyle and facial expression.
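
To make the arithmetic behind the 81% point concrete, here is a minimal back-of-the-envelope sketch in Python. The assumptions are ours, not the study’s: the 81% figure is read as per-photo classification accuracy applied symmetrically to both classes, and the population base rate of gay men is set to a purely hypothetical 5%.

```python
# Hedged sketch: what an "81% accurate" classifier would mean in a
# general population, under our stated (hypothetical) assumptions.

population = 100_000   # hypothetical population size
base_rate = 0.05       # hypothetical share of gay men (assumption, not from the study)
accuracy = 0.81        # the study's claimed figure, taken at face value

gay_men = population * base_rate
straight_men = population - gay_men

true_positives = gay_men * accuracy               # gay men correctly labeled
false_positives = straight_men * (1 - accuracy)   # straight men mislabeled as gay

print(f"Straight men mislabeled as gay: {false_positives:,.0f}")
print(f"Gay men correctly labeled:      {true_positives:,.0f}")
print(f"Share of 'gay' labels that are wrong: "
      f"{false_positives / (false_positives + true_positives):.0%}")
```

Under these illustrative numbers, about 18,050 straight men would be mislabeled against 4,050 correct labels, so roughly four out of five “gay” labels would be wrong. This is why the misidentification rate matters far more than the headline accuracy figure.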

Stanford University and the researchers hosted a call with GLAAD and HRC several months ago in which both organizations raised these concerns and warned against overinflating the results or their significance. There was no follow-up after the concerns were shared, and none of these flaws has been addressed.

Based on this information, media headlines claiming that AI can tell whether someone is gay by looking at one photo of their face are factually inaccurate.

###

 

September 12, 2017

www.glaad.org/blog/stanford-university-research-under-ethical-review-after-falsely-claiming-artificial
