Remember Skynet, the artificial intelligence that tried to wipe out humanity in the Terminator films? That is the classic example of AI gone wrong. Fortunately, this will not be the case for us in 2022: AI is not yet nearly that advanced. But the films raise some interesting questions. For example, how do we define ethics when it comes to developing and applying AI?

Anna Collard

Here are some of the areas where I believe AI could go wrong, and where more education is needed in 2022:

Gender and Racial Bias in AI

According to a UNESCO report, only 12% of artificial intelligence researchers and 6% of software developers are women, and women of color are even less represented. The field is predominantly white, Asian and male. These middle-class white men simply cannot be aware of the needs of all of humanity, and the technology they develop is inevitably biased in favor of people like themselves.

Because of how machine learning works, when you feed it biased data, it gets better and better at being biased. This means that if sexism or racism, stemming from conscious or unconscious biases, is embedded in the data, the algorithm will pick up on and reproduce that pattern. We have already seen examples of self-driving cars (i.e. driven by AI) disregarding certain ethnicities when deciding how to avoid collisions. Does that make a car racist? Well, not on purpose. The developers simply failed to provide sufficiently diverse training data for the in-car AI to learn from. This created a bias that negatively affected its decision-making process.
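To make this concrete, here is a minimal sketch of how a model trained on biased historical decisions simply reproduces that bias. The scenario, data and numbers are all invented for illustration: a synthetic hiring dataset in which one group was historically favored regardless of skill.

```python
# Hypothetical illustration only: synthetic data, invented feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identically distributed skill scores.
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)

# Biased historical labels: past decisions favored group A regardless of skill.
hired = (skill + 1.5 * (group == 0) + rng.normal(0, 0.5, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Ask the model about two applicants with identical (average) skill.
for g, name in ((0, "group A"), (1, "group B")):
    X = np.column_stack([np.full(100, g), np.zeros(100)])
    print(name, "predicted hire rate:", model.predict(X).mean())
# The model predicts a far higher hire rate for group A at identical skill:
# it has learned the historical preference, not a real difference in ability.
```

Nothing in this sketch is malicious; the bias comes entirely from the labels the model was trained on, which is exactly the mechanism described above.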

In her book Invisible Women, Caroline Criado-Perez explains the impact of these data gaps: if left unaddressed, such algorithms will have far-reaching consequences, exacerbating gender (and racial) inequalities. This highlights the need to raise awareness in society about the negative and positive implications of AI for girls, women and non-binary people, as well as the need for greater African representation in the field of AI and in global policy-making around AI ethics and regulation.

Deepfakes

Deepfakes, a portmanteau of the terms "deep learning" and "fake", are realistic video and audio recordings that use artificial intelligence and deep learning to create fake content. The technology can replace faces and speech to make it look like someone said or did something that never happened.

Deepfakes, when done well, can be used to create content with an extremely high potential for deception. All you need is a powerful computer and enough existing footage of the person you want to insert into the original media to train the AI (deep learning algorithms) and create new realities.

Deepfakes have a promising future in the film industry, for example reproducing a shot without the actual actor having to be flown in every time. Or in the medical field, recreating someone's voice if they have lost it.

But deepfakes can also be used for more nefarious purposes. In 2020, the FBI had already warned of deepfakes being combined with the highly successful form of social engineering attack known as Business Email Compromise (BEC).

Attackers effectively harness AI to add credibility to an attack by creating a deepfake audio message masquerading as a legitimate requester, such as a company's CEO authorizing a fraudulent money transfer (common practice in a BEC attack).

Disinformation

According to the MIT study The spread of true and false news online, it takes about six times longer for true stories to reach 1,500 people than for false stories to reach the same number of people. This is due to the emotional nature of misinformation: it provokes surprise and disgust in readers, making them more likely to share it.

Due to the pandemic, most political meetings are now held virtually, which opens up the possibility of voice and video recordings being leaked. These recordings can be very misleading, as they lack the crucial context in which a specific comment was made.

Imagine combining these leaked recordings with deepfake technology that changes the meaning of what was said, potentially triggering emotional responses in anyone listening. Such clips can have a powerful and damaging impact as they do the rounds on WhatsApp, Telegram or other chat apps. These platforms lend themselves to spreading disinformation because they are not easy to monitor and people are used to trusting voice notes from their groups.

This means that the political opinions of millions of potential voters could be negatively influenced. The South African government has stepped in to try to stop the spread of disinformation by introducing legislation that makes spreading false information a prosecutable offense, but it remains to be seen how this will be enforced.

One organization trying to curb the spread of fake news and disinformation is Real411, which provides a platform for the public to report digital harms, including disinformation. Special attention is paid to topics such as Covid-19 and, during election periods, to complaints about election-related content.

While the positive applications of new technologies can be abundant, there is always potential for abuse. This is certainly the case with AI and AI-related technologies such as deepfakes. It calls for innovative new approaches (perhaps the use of DTV to verify the validity of video content?), forward-thinking policies and greater awareness to effectively prepare our societies for this new reality.