Researchers at Brigham and Women's Hospital – a teaching hospital of Harvard Medical School in Boston, Massachusetts – found that cancer treatment plans generated by OpenAI's revolutionary chatbot were full of errors.
According to the study, which was published in the journal JAMA Oncology and initially reported by Bloomberg, one-third of the large language model's responses contained incorrect information when it was asked to generate treatment plans for a variety of cancer cases.
The study also noted that the chatbot tended to mix correct and incorrect information together in a way that made it difficult to identify what was accurate. Out of a total of 104 queries, around 98% of ChatGPT's responses included at least one treatment recommendation that met the National Comprehensive Cancer Network guidelines, the report said.
The authors were "struck by the degree to which incorrect information was mixed in with correct information, which made it especially difficult to detect errors – even for experts," coauthor Dr. Danielle Bitterman told Insider.