Researchers and doctors are banking on artificial intelligence and machines to help them diagnose patients, hoping that the use of technology in their practices might speed up diagnosis and treatment. The use of AI (artificial intelligence) could also help them pick up on processes and patterns that aren’t apparent to the human eye or brain. The field of psychiatry usually requires conversations with patients in order to devise a treatment plan, and AI has the potential to augment that care.
Peter Foltz, a research professor at the University of Colorado’s Institute of Cognitive Science, said, “We’re working on how to analyze patient responses. Currently in mental health, patients get very little interaction time with clinicians. A lot of them are remote and it is hard to get time with them.” Foltz and his team are working on applications that can collect data about a patient’s mental state and report it to clinicians. He further explained that these applications are not meant to replace doctors and psychiatrists but to help them improve the quality of care and assessment.
As research on this particular subject continues, the psychiatric community needs to have trust in AI and its abilities. “In order to really be able to do this, there needs to be a greater understanding from laypeople and the psychiatric community on what artificial intelligence can do, what it can’t do, and how to evaluate it,” says Foltz.
Foltz and his colleagues have outlined a framework in hopes of establishing that trust. The framework highlights three goals that artificial intelligence must meet to succeed in this field: explainability, transparency, and generalizability.
AI programs are trained on a specific set of data with known diagnoses. However, their results are often limited to the specific populations they were trained on.