Physicians Grapple with A.I. in Medical Practice, Highlighting Inadequate Regulations

In medicine, cautionary tales about the unintended consequences of artificial intelligence (AI) have become legendary. One program designed to predict sepsis, a deadly blood infection, triggered numerous false alarms. Another, intended to improve follow-up care for the sickest patients, actually deepened health disparities.

Because of these flaws, doctors have been hesitant to fully embrace AI, largely keeping it on the sidelines as a scribe, a casual second opinion, or a back-office organizer. Yet investment in, and momentum behind, AI in medicine continue to grow.

Within the Food and Drug Administration (FDA), which plays a key role in approving new medical products, AI has become a hot topic. It is being used to discover new drugs, identify unexpected side effects, and even assist overwhelmed staff with repetitive tasks. However, the FDA has faced sharp criticism over how it vets and describes the AI programs it approves for detecting various medical conditions.

Physicians and experts are calling for increased scrutiny and regulation of AI, and the White House and Congress have discussed the need for oversight. The absence of a single agency governing AI in medicine complicates the situation. Senator Chuck Schumer has even summoned tech executives to Capitol Hill to discuss the future of AI and its potential pitfalls.

Critics see the FDA’s current approach to overseeing AI programs as outdated and insufficient. Developers are not required to disclose how their programs were built or tested, leaving doctors with many unanswered questions. That lack of transparency makes doctors wary of using AI, fearing it may lead to unnecessary procedures, higher costs, and potentially harmful treatments.

Dr. Eric Topol, an expert in AI and medicine, believes that the FDA has allowed shortcuts and needs to require more rigorous studies to assess the benefits and risks of AI programs. Large-scale studies have already revealed some of the flaws and limitations of AI in medicine, such as false positives and missed diagnoses.

The FDA’s reach is limited to products that are approved for sale, and it has no authority over internal AI programs developed by health systems, insurers, or other organizations. This lack of oversight raises concerns about the potential impact of AI on patient care and coverage decisions.

Efforts to address these issues include building labs where developers can access vast amounts of data and test AI programs. Challenges remain, however, including interoperability problems between different software systems and disagreements over who should pay for AI technology.

Despite the challenges, some success stories have emerged that demonstrate the potential of AI in medicine. In one case, an AI program helped detect a brain clot in a stroke patient, leading to immediate treatment and a successful outcome. Such successes are not yet widespread, though, and more research and regulation are needed to ensure the safe and effective use of AI in medicine.
