DailyDispatchOnline

Tech bros need to realise deepfake porn ruins lives – and the law has to catch up

Imagine discovering that a stranger has taken your photo from the internet and merged it with a sexually graphic image found online. Or coming across a video of yourself engaging in sexual activity with a person you don’t even know.

Can you imagine the fear of your loved ones or co-workers stumbling upon the fabricated content and mistaking it for the real you? Despite your frantic efforts to remove it from social media, the false persona endlessly resurfaces and replicates. These images could circulate indefinitely, with no consequences for those who created them.

This terrifying scenario has become a harsh reality for many people around the globe. Recently, one of the world's biggest pop stars was targeted with non-consensual deepfake pornography, prompting the social media platform X to block searches for the artist after a surge in fake explicit images.

However, Taylor Swift is not the only woman to endure this embarrassing, exploitative, and degrading ordeal.

According to last year's State of Deepfakes report, deepfake pornography increased sixfold between 2022 and 2023. Unsurprisingly, women were the targets in 99% of documented incidents.

Technology now allows a 60-second deepfake video to be created from a single clear image in under 25 minutes, at no cost. Every day, more than 100,000 sexually explicit fabricated images and videos – often made from photos lifted from private social-media accounts – are spread across the web. Referral links to the companies providing these images have increased by 2,408% year on year.

There is no question that non-consensual deepfake pornography has become a significant human rights violation. But what can be done to stop this rapidly expanding industry from perpetuating identity theft and harming its victims?

[Image: a head-and-shoulders picture of a blonde woman]

The UK has made it illegal to share deepfakes, though not to create them. Some laws have been introduced to make search engines and online platforms more accountable, but they are not comprehensive enough.

The United States currently offers victims of non-consensual image sharing no federal protection. However, a bipartisan bill recently introduced in the Senate would give victims the right to sue those involved in making and sharing these images.

Laws criminalising the production and distribution of non-consensual sexual deepfakes are undoubtedly necessary, but they are not sufficient on their own. The entire system that enables these activities must also be held accountable.

Experts in AI-generated imagery agree that to curb the spread of sexual deepfakes, companies across social media, search, payment processing, domain registration, security and cloud computing must act against the creators of these videos by cutting off their financial resources.

Sophie Compton founded the #MyImageMyChoice movement to combat deepfake abuse, and directed Another Body, a 2023 documentary following female students seeking justice after being targeted with non-consensual deepfake pornography. According to Compton, search engines have a crucial role to play in preventing this abuse.

Prof Hany Farid, an expert in digital image forensics at the University of California, Berkeley, however, believes that those profiting from the deepfake exploitation of women will not act on their own. Lacking a moral compass, he argues, they will keep ignoring the issue in favour of financial gain unless they are forced to behave differently.

As an expert in promoting gender equality, I believe there is a deeper and more systemic issue at hand.

In my experience, the overwhelmingly male workforces of AI companies and engineering schools tend to cultivate a culture with little empathy for the challenges women face online, particularly the harm sexual deepfakes inflict on survivors. As a result, little effort goes into combatting the rise of non-consensual sexual imagery.

[Image: the eyes of what appears to be a young woman, seen at the top of a mobile phone, with the words "public service announcement" and "create AI girls" written across the picture]

A recent study found gender discrimination on the rise in the male-dominated tech industry. In the United States, women make up only 28% of tech professionals and a mere 15% of engineering roles.

When I interviewed Compton about her research into the non-consensual sexual abuse industry, she described an ongoing culture of misogyny on the online platforms frequented by engineering students working in AI. The women she followed for her documentary recalled regularly hearing jokes about pornography, seeing peers spend excessive amounts of time online (particularly on platforms such as 4chan), and encountering an attitude of superiority towards conventional norms and towards women.

The prevalent belief that fake images cause no real harm is far from reality. We must prioritise support for victims and reliable measures to prevent and remove non-consensual sexual deepfakes.

In the time it has taken to read this article, hundreds of new non-consensual images or videos of women will have been uploaded to the internet, potentially tearing lives apart – and nothing is being done to stop it.

Advances in generative AI could accelerate the exploitation of women at an alarming pace and scale. Countless women already need help. Without intervention from governments, regulators and businesses, the harm to women globally could be monumental.

Luba Kassova is a journalist, advocate and advisor on gender equality.

Source: theguardian.com