Ethics of AI: Video from the Alberta Party raises concerns

By Carly Robinson

The Alberta Party is apologizing for a since-deleted video, hoping their mistake can be a learning opportunity for others.

The controversy had nothing to do with the script delivered by a man in front of a superimposed image of Calgary, but rather with the fact that the man wasn’t real.

Party leader Barry Morishita says the video was generated using the artificial intelligence program Synthesia, where users can have a script read out by an avatar or “spokes-bot”.

“We’re always trying to start a thoughtful conversation based on policy and positions rather than personalities,” Morishita tells CityNews. “So we thought, ‘Why not give this a try?’”

But as soon as the video was posted, comments questioning the party for using an AI video to share the message started flooding in. The video was soon deleted, but later re-posted to Twitter with an explanation of what happened.

“When you take risks, you can step into some stuff,” says Morishita. “That’s what happened, we stepped into it a bit, but we want to be fully transparent about what we did.”

This is an example of why organizations need to think about the ethical dilemmas that come with using new tech before they hit “post”, according to the CEO of Edmonton-based Ethically Aligned AI.

“We get pushed into these new technologies, and new situations,” says Katrina Ingram, “and we don’t really know anymore what’s appropriate and what’s not appropriate.”

The first thing Ingram tells anyone asking about AI ethics is to “be transparent and really identify when you are using AI in a process or a project.”

As more programs targeting the general public hit the market, Ingram says it’s the perfect time for organizations to come up with an AI plan.

She says, however, it goes beyond videos: image-generating apps like Lensa use AI to create artwork, and ChatGPT, launched in November, lets anyone input prompts and generate text, from an essay to an email.

“Increasingly it becomes very blurred. It’s hard to really tell the difference. And that’s why transparency is really such a core principle when it comes to the use of AI systems,” said Ingram.

She says she’s concerned about the trend of posting AI-generated content to see whether anyone notices, without thinking through the consequences.

“We don’t do that for other things, we don’t do that with medical devices, we don’t do that with pharmaceuticals. Why are we doing that with technology?” Ingram noted.

“I think we need to really consider where we are at in this brave new world of AI-enabled technology, and really think about this idea of not launching it into the public sphere until we have real consideration of the impact.”
