A psychosocial safety training company that used the full name of a victim in a training session at her former workplace says artificial intelligence (AI) is to blame.
Psychosocial Leadership trainer Charlotte Ingham said she used Microsoft's Copilot chatbot to create examples of the psychosocial risks staff might face at Bunbury Prison, where she delivers training.
One of the examples used the name Bronwyn Hendry, a real former worker at the prison.
"I went into it believing I had a fictional example," Ms Ingham said.
"When I put up the slide, someone in the room said, 'This isn't fiction, it's real'."

Staff at Bunbury Prison recently took part in a psychosocial safety training session. (ABC South West: Georgia Hargreaves)
Ms Hendry is the plaintiff in a Federal Court case against the Department of Justice and a number of Bunbury Prison staff over allegations of sexual abuse and bullying.
"I didn't know [the chatbot] would use real people's names," Ms Ingham said.
«How would I know?»
Ms Ingham said she had not been able to get the chatbot to reproduce the output, which Microsoft confirmed could be the case.
However, the ABC was able to independently verify that the chatbot can produce real names and details in case studies.
When the ABC asked for a "fictional case study" about abuse at a WA prison, Copilot provided an example that included Ms Hendry's full name and that of the prison warden, as well as actual details of the Federal Court case.

A recording of a conversation between an ABC reporter and Copilot shows the chatbot using real names and details even when asked for a fictional case study. (Supplied: Copilot)
It also said the case study was "fictional" but "based on real events".
A Microsoft spokesperson said Copilot "may include names and events found in searches…"
Victim calls case study 'confronting'
Ms Hendry said having her experience used in training delivered for the Department of Justice at her former workplace felt "confronting".
"You have to remember that I am fighting tooth and nail to prove what happened to me in the Federal Court," she said.
"It's very provocative."

Former prison officer Bronwyn Hendry's name was used in training for staff at her former workplace. (Supplied: Bronwyn Hendry)
The Department of Justice said that while it arranged the training, all of the training material was prepared and delivered by the external trainer.
It said it was not aware that Ms Hendry's name would be used, but that information about her case was publicly available.
"The department is disappointed by the incident and is taking steps to ensure training is not delivered in this manner again," the spokesperson said.
Ms Hendry said this was not good enough.
"At the end of the day, it's the responsibility of the Department of Justice," she said.
"They brought her in. They paid her to deliver it. They should have had checks and balances in place."

WorkSafe is investigating allegations of bullying and harassment among staff at Bunbury Prison. (ABC News: Amelia Season)
The incident comes amid an ongoing WorkSafe investigation into allegations of bullying and harassment among staff at Bunbury Prison.
The watchdog issued a notice to the prison last year requiring senior staff to undertake further workplace safety training.
AI expert warns companies to tread carefully
The director of the Centre for AI and Digital Ethics at the University of Melbourne said the incident raised questions about how best to use AI chatbots in the workplace.
Professor Jeannie Paterson said the main problem was "regurgitation", where chatbots spit out real information rather than making something up.
She added that the output produced in the ABC's interaction was particularly interesting because the chatbot stated that the case study was "fictional".

Jeannie Paterson says "regurgitation" is what causes chatbots to use real people's names in "fictional" scenarios. (Supplied: Jeannie Paterson)
"In one sense, we could say that the person who is prompting it has been misled," Professor Paterson said.
"Except that one of the things we know when we use generative AI is that it makes things up … it can't be trusted."
She said this could happen when a request was very specific or there was little information available on the topic.
"That's why I would say companies shouldn't say, 'Don't use it'. Companies should say, 'Here's our usage policy'," she said.
"And the usage policy would be: don't put in information that is highly sensitive, and check any names."
