A 65-year-old former Commonwealth Bank of Australia (CBA) employee says she was replaced by the very AI chatbot she helped train.
Kathryn Sullivan, who worked as a teller for 25 years, was made redundant in July after weeks spent scripting and testing responses for the bank’s “Bumblebee” AI.
“I was completely shell-shocked, alongside my colleague,” Sullivan told The Sun. “We just feel like we were nothing, we were a number.”
She said she supports AI but believes safeguards are needed. “While I embrace the use of AI and I can see a purpose for it in the workplace and outside, I believe there needs to be some sort of regulation to prevent copyright infringements or replacing humans,” she added.
CBA initially ignored her inquiries but later conceded the redundancy decision was a mistake. A spokesperson said:
“The bank’s initial assessment that 45 roles were not required did not adequately consider all relevant business considerations, and because of this error, the roles were not redundant. We have apologised to the employees concerned and acknowledge we should have been more thorough.”
The bank offered Sullivan her role back, but she declined, citing job insecurity.
Meanwhile, CBA continues to push forward with AI. CEO Matt Comyn recently announced a partnership with OpenAI to tackle scams, fraud, and cybercrime. “Our strategic partnership with OpenAI reflects our commitment to bringing world-class capabilities to Australia, and exploring how AI can enhance customer experiences, better protect our customers, and unlock new opportunities for Australian businesses,” he said.
The case has reignited debate on AI ethics, job security, and the risk of employees training systems that later replace them. Similar concerns are growing worldwide, as banks and firms adopt AI for customer service, fraud detection, and back-office tasks.