Over the past 12 months, consumer AI capabilities have soared in both sophistication and accessibility. For creators and communicators it is an exciting and scary new frontier. The implications are vast across many industries, so here we focus only on how it will affect what we do day to day.
First and foremost, AI is a tool, and thankfully it still requires the human touch. The problem isn't the tools themselves, but the ethics of the humans behind them.
We’re only at the start of our understanding of these tools, which is why it’s so important to think about the implications and opportunities of using them ethically. To do this we’ve split our thinking into two sections: AI Avatars and AI Enhancements.
1. AI Avatars: AI which is used to create digital humans or avatars.
Transparency
Remember the days when influencers didn’t have to write #AD on adverts, leading to accusations of deliberately misleading consumers?
While most of us can tell AI-generated content from a mile off, there will come a time when it becomes so sophisticated that it is indistinguishable from filmed or animated content.
A simple ‘#AI’ hashtag, watermarked or baked into the footage, would help by making it clear to audiences that the content they are watching has been manipulated or enhanced using AI tools.
This would also help combat AI-generated scams, which have been on the rise in recent months.
The label doesn’t have to be used every time AI has enhanced footage (for example, when fixing eyelines), only when AI has been used in a way that could be interpreted as misleading.
Misleading audiences in campaigns
One major concern is using AI to deliberately mislead and influence audiences. For example, a voice clone of Keir Starmer recently went viral on Twitter, claiming to be a ‘leaked’ clip of him berating one of his aides.
It was generated to deliberately mislead, and highlights how quickly public outrage can spread. This is different, however, from Channel 4’s deepfake of the Queen delivering an alternative Christmas Day speech, which was clearly labeled as using technology to manipulate someone’s image and wasn’t intended to mislead.
Another example is when Just Stop Oil created an AI-generated video of Rishi Sunak explaining why he’s screwing over the planet. This wasn’t made to mislead; it’s clearly a joke and a parody (there is not much difference between this and a Rishi lookalike). However, it would have been safer to include an ‘#AI’ watermark in the top corner of the video.
Because we don’t want to undermine democracy and plunge the world into confusion and chaos, we believe four standards should apply to avoid this:
1) AI should not be used to deliberately mislead audiences.
2) When AI is used to manipulate someone’s image, clearly label it as such.
3) Using political figures in deepfakes is acceptable when the content is clearly a parody or makes a point about the dangers of these technologies.
4) Using non-political figures (for example actors, who would miss out on being paid if AI were used instead) in campaigns without their consent is a no-go, such as using Ryan Reynolds’ likeness or voice to promote the Republican party.
Using AI for advertising purposes
Using AI to promote products is a whole different beast. For example, a dental practice recently used a deepfake of Tom Hanks to promote its business without his consent, and used paid media to boost the post.
As with campaigns, we believe that using someone’s likeness without their consent is a no-go area. A lot of the work we do for businesses relies on authenticity and trust, so using a spokesperson without their knowledge is not a good look.
For example, if we used a deepfake of Daniel Craig to promote Ethical Banking, audiences would feel misled and unable to trust the product or business.
2. AI Enhancements: AI tools used to create and enhance content behind the scenes.
Due to the size of our current team, we believe that using AI to artwork and create concepts for pitches is justified. This does not replace any of our team’s jobs; instead it gives us an efficient and more cost-effective way of expressing our creativity.
Using AI also helps us explain and visualize our concepts to VFX artists and graphic designers, and craft a better end result.
AI artworking in Production
We totally understand designers’ concerns that their roles are under threat from powerful AI tools. We share these concerns, but believe that VFX artists and designers produce far better, more bespoke work than AI can at this current time, and when budget is available we will always go for that human touch.
However, some of our work supports organizations who are punching above their weight to combat the climate crisis. Because of this, the budgets we work with are sometimes lower, and AI artworking can help when budget for more bespoke VFX and design work is not available.
TLDR: we will always strive for the human touch, but due to budget limitations we will sometimes need to use AI tools for visualisation.
AI as a fix
One thing we are already using AI tools for is fixing footage to avoid a costly reshoot. For example, when interviewing people their eyeline can sometimes drift at a crucial moment. This happens far more often than you would think, but thankfully AI tools have been able to fix these eyelines so the subject looks straight at the camera rather than appearing to read from a script.
Another example: we have used AI tools to repair broken audio and avoid re-recording. Both uses have been extremely helpful in our post-production process, and neither misleads audiences; they only enhance or fix the human work we’ve already done.
Because this is behind-the-scenes technical wizardry that neither negatively affects people’s work nor misleads audiences, we don’t feel we need to disclose the use of AI in these cases.
The future of AI?
The world of AI is a brave new frontier and we are only at the beginning of this journey, so we welcome being challenged on our thoughts.
But as a closing thought: James Grimmelmann, author of an internet law casebook, said that ‘AI is going to be more disruptive and dramatic than the internet’, and with no AI-specific laws currently in place it really is the wild west out there.
Therefore, we will keep reviewing this technology as it grows, but we believe the ethical thing for creators to do is to adhere to the legacy laws we already have, such as copyright, privacy and defamation law, and to be transparent about when and how we are using AI technologies until proper regulation is in place.