Artificial intelligence holds major promise for health care, and the Trump administration's action plan is a good first step
Physicians recognize the transformative potential of AI to improve care delivery and patient outcomes, and they recognize that realizing that promise will require physician-led innovation and thoughtful decisions today to shape the future of medicine we all want.
The impact of health care technology depends on the reliability of its underlying systems, whether algorithms, devices, or data, and on how thoughtfully they are designed, deployed, and integrated. For physicians to adopt an innovation in health care, it must meet four key criteria:
- Demonstrate effectiveness in real-world clinical settings;
- Show clear value for patients, physicians, and the broader health system;
- Ensure clear and appropriate liability frameworks;
- Integrate smoothly into clinical workflows.
The Trump administration's AI Action Plan, released last week, is an exciting and welcome development. It demonstrates close attention to building public and professional trust in AI technology through transparent and ethical oversight, and to accelerating national standards for safety, performance, and interoperability.
At the American Medical Association, we also believe in the importance of a broad, coordinated federal regulatory approach to the design and integration of AI, and in a commitment to upskilling the physician workforce. However, to ensure health care AI reaches its full potential, strong physician representation is needed at every stage.
Our surveys of physician sentiment on augmented intelligence in health care show growing enthusiasm for the technology's clinical applications, especially its potential to streamline workflows and create practice efficiencies. Still, two in five physicians remain equally excited and concerned about health AI.
Truly building trust, among both patients and physicians, requires additional considerations that would strengthen the action plan's impact. Specifically, four key areas of opportunity include: 1) physicians must be full partners at every stage of the AI lifecycle; 2) a coordinated, transparent whole-of-government approach is necessary; 3) ensuring freedom from bias will build trust; and 4) frameworks that appropriately assign liability.
Physician involvement at every stage of the AI lifecycle means physicians are full partners in design, development, governance, rulemaking, postmarket surveillance, and clinical integration. Physician experts are uniquely qualified to judge whether an AI tool is valid, fits the standard of care, and supports the patient-physician relationship.
Health care AI is unique in that the stakes for health and well-being are high. Providing clarity and consistency for developers, deployers, and end users, both patients and physicians, is essential, and that will come only from a whole-of-government approach that includes the states. Federal and state policymakers must work to avoid fragmentation that would stifle innovation. This approach would prioritize safety, accountability, and public trust in AI systems.
Trust in AI begins with trust in how data are used. Physicians and patients need assurances that the data powering AI-enabled tools are secure, de-identified, free from bias, and governed by strong consent frameworks. We need comprehensive privacy protections and governance structures that ensure patients understand how their data are used, and that they can control that use. Moreover, bias in AI can cause real patient harm. Eliminating references to misinformation or to diversity, equity, and inclusion from risk frameworks may hinder our ability to address these issues. Bias mitigation efforts should remain a cornerstone of any ethical AI strategy.
Finally, liability concerns around AI are a top issue for physicians. To further build trust and advance adoption, it will be essential to ensure frameworks exist that protect physicians and appropriately assign liability for AI errors and performance issues.
Artificial intelligence is not just the future of health care; it is very much its present. The administration's new AI Action Plan is a step forward in bringing some of these issues to the forefront while health AI technology is still in its infancy. As physicians, we have an opportunity, and a responsibility, to act today to ensure AI transforms health care rather than simply automating its inefficiencies. We are eager to build on this momentum and work together to create a future in which innovation enhances every patient encounter and augments the care of every physician.
John Whyte, MD, MPH, is the executive vice president and CEO of the American Medical Association. Margaret Lozovatsky, MD, is the chief medical information officer at the AMA and vice president of digital health innovations.