Artificial intelligence lab OpenAI has published a blog post seeking to address fears that its technology will be used to meddle with elections, as more than a third of the world's population, including South Africa, prepares to head to the polls this year.
The use of AI to interfere with election integrity has been a concern since the Microsoft-backed company released two products: ChatGPT, which can mimic human writing convincingly, and Dall-E, whose technology can be used to create “deepfakes”, or realistic-looking images that are fabricated.
Those worried include OpenAI’s own CEO Sam Altman, who testified to US lawmakers in May that he was “nervous” about generative AI’s ability to compromise election integrity through “one-on-one interactive disinformation”.
The San Francisco-based company said that in the US, which will hold a presidential election this year, it is working with the National Association of Secretaries of State, an organisation that focuses on promoting effective democratic processes such as elections. ChatGPT will direct users to CanIVote.org when asked certain election-related questions, it added.
The company also said it is working on making it more obvious when images are generated with Dall-E, and is planning to put a “cr” icon on such images to indicate they were AI-generated, following a protocol created by the Coalition for Content Provenance and Authenticity.
It is also working on ways to identify Dall-E-generated content even after images have been modified.
In its blog post, OpenAI emphasised that its policies prohibit its technology from being used in ways it has identified as potentially abusive, such as creating chatbots pretending to be real people, or discouraging voting.
It also prohibits Dall-E from creating images of real people, including political candidates, it said. — Anna Tong, (c) 2024 Reuters