Healthcare must establish safeguards around AI for transparency and security


Four out of 10 patients perceive bias in their doctors, according to a MITRE-Harris survey of patient experiences. And beyond patients being more sensitive to provider bias, AI tools and machine learning systems themselves have been shown to introduce bias and discrimination.

Relatedly, a recent survey found that 60% of Americans would not be comfortable with providers relying on AI for their healthcare. But between provider shortages, reimbursement pressures and increased patient demand, over time providers may have no choice but to turn to AI tools.

Healthcare IT News sat down with Jean-Claude Saghbini, AI expert and chief technology officer at Lumeris, a value-based care technology and services company, to discuss AI concerns in healthcare – and what healthcare leaders and clinicians can do about them.

Q. How can healthcare organization CIOs and other healthcare IT leaders address the perceived bias in artificial intelligence as the popularity of AI systems increases?

A. When we talk about AI we often use words like “learning” and “training.” This is because AI models are trained primarily on human-generated data and thus learn our human biases. These biases are a fundamental challenge in AI, and they are especially problematic in healthcare, where patients’ health is at stake and where their presence in models can perpetuate medical inequities.

To address this, healthcare IT leaders need to better understand the types of AI that are embedded in the solutions they are adopting. Perhaps most importantly, before implementing new AI technologies, leaders must ensure that the vendors providing these solutions appreciate the harm that AI can bring and have designed their models and tools accordingly to prevent it.

This can range from ensuring that the upstream training data is unbiased and diverse, to applying corrective adjustments to model outputs to compensate for biases in the training data.

At Lumeris, for example, we are taking a number of steps to combat bias in AI. First, we proactively identify and correct for the health inequities represented in the underlying data as part of our commitment to equity and justice in healthcare. This involves analyzing historical clinical data for a population and adjusting our training samples to ensure they do not disadvantage specific groups.
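To make the sample-adjustment idea concrete, here is a minimal sketch of one common approach: reweighting training rows so each group’s influence matches its share of the served population. This illustrates the general technique only, not Lumeris’ actual pipeline; the column names and target shares are hypothetical.

```python
# Hedged sketch: reweight training samples so the weighted group mix matches
# a target population mix. Illustrative only; not Lumeris' actual method.
import pandas as pd

def compute_sample_weights(df: pd.DataFrame, group_col: str,
                           target_shares: dict) -> pd.Series:
    """Weight = target share / observed share, so underrepresented groups count more."""
    observed = df[group_col].value_counts(normalize=True)
    return df[group_col].map(lambda g: target_shares[g] / observed[g])

# Toy usage: group B is underrepresented in the historical data (20% observed
# vs. a 50% target), so its rows get weight 2.5 and group A's rows get 0.625.
df = pd.DataFrame({"group": ["A"] * 80 + ["B"] * 20, "outcome": [0, 1] * 50})
weights = compute_sample_weights(df, "group", {"A": 0.5, "B": 0.5})
# `weights` can be passed as sample_weight to most scikit-learn estimators' fit().
```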

Second, we are training our models on diverse types of data to ensure they are representative of the populations we serve. This includes the use of integrated data sets that reflect a range of patient populations, health statuses and care settings.

Finally, we include social determinants of health (SDOH) variables in our models alongside clinical variables, thereby ensuring that predictions and risk scores account for patients’ socioeconomic circumstances. For example, two patients with very similar medical conditions can be directed to different programs, with better results, when we incorporate SDOH data into AI models.
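As an illustration of what combining clinical and SDOH inputs might look like, here is a minimal scikit-learn sketch. The feature names and the choice of estimator are assumptions made for the example, not a description of Lumeris’ models.

```python
# Hedged sketch: a risk model whose inputs combine clinical variables with
# SDOH variables. Feature names and estimator are illustrative assumptions.
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

clinical_features = ["age", "num_chronic_conditions", "last_a1c"]
sdoh_features = ["housing_status", "transportation_access", "food_insecurity"]

preprocess = ColumnTransformer([
    ("clinical", StandardScaler(), clinical_features),
    ("sdoh", OneHotEncoder(handle_unknown="ignore"), sdoh_features),
])

# Two patients with identical clinical values can receive different risk
# scores, and hence different program referrals, once their SDOH columns differ.
risk_model = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
# risk_model.fit(X_train, y_train); risk_model.predict_proba(X_new)
```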

We are also taking a transparent approach to developing and deploying our AI models, incorporating feedback from users and applying ongoing oversight to ensure that our models remain consistent with best clinical practices.

Combating perceived bias in AI requires a comprehensive approach that spans the entire AI development lifecycle and cannot be an afterthought. This is the key to promoting fairness and justice in healthcare AI.

Q. How do health systems strike a balance between patients who don’t want their doctors to rely on AI and overwhelmed doctors who are looking to automation for support?

A. First, let’s establish two points. Point number 1 is that between waking up in the morning and seeing each other at the office, chances are both the patient and the doctor have already used AI many times – asking Alexa about the weather, trusting a Nest device to control the temperature, following Google Maps for directions, and so on. AI is already supporting many aspects of our lives and has become inevitable.

Point number 2 is that we are moving toward a shortage of 10 million health workers worldwide by 2030, according to the World Health Organization. Using AI to augment the capacity of physicians and reduce the severity of this shortage is no longer optional.

I understand that patients are concerned, and rightfully so. But I encourage them to think of AI as assisting the doctor in patient care, as opposed to patients being “treated” by AI tools, which I believe is what many people are actually worried about.

The notion of AI replacing doctors has become a popular topic recently, but the truth is that AI engines are not replacing doctors anytime soon. With new technologies such as generative AI, we have the opportunity to deliver meaningful improvements for the well-being of both patients and doctors. Human expertise and experience remain critical to healthcare.

Finding the balance between patients who don’t want to be helped by AI and overburdened doctors looking to AI systems for support is a difficult issue. Patients may worry that their care is being handed over to machines, while doctors may feel overwhelmed by the amount of data they need to review to make the right decisions.

The key is education. Many headlines in the news and on the internet are designed to get clicks. By looking past these misleading articles and focusing on real-world uses of AI in clinical practice, patients can see how AI can augment physician knowledge, accelerate access to information, and surface patterns hidden in data that even the best doctors could easily miss.

Furthermore, by focusing on facts rather than hype, we can also explain that this tool (and AI is just a tool), when well integrated into the workflow, can increase the doctor’s ability to provide the right care while keeping the doctor in the driver’s seat in terms of the interaction with, and responsibility for, the patient. AI is, and can continue to be, a valuable tool in healthcare, providing doctors with information and recommendations that improve patient outcomes and reduce costs.

I personally believe that the best way to bridge the gap between patients and doctors on AI is to ensure that AI is used as a tool to support clinical decision-making rather than as a replacement for human expertise.

Lumeris’ technology, for example, powered by AI among other technologies, is designed to provide doctors with insightful information and actionable recommendations they can use to guide their care decisions – empowering them to make the final call.

In addition, we believe it is important to include patients in discussions about the development and deployment of AI systems, to ensure that their concerns and preferences are taken into account. Patients may be more willing to accept the use of AI if they understand the benefits it can bring to their care.

In the end, it is important to remember that AI is not a silver bullet in medicine, but a tool that can help doctors make better decisions and improve medical processes, especially with some of the new innovations such as GPT.

By ensuring that AI is used effectively and responsibly, and by involving patients in the process, healthcare organizations can strike a balance between the concerns of patients and the needs of overburdened physicians.

Q. What should providers and healthcare IT leaders be aware of as more and more AI technologies become available?

A. The use of AI in health IT is gaining a lot of attention and is a major investment category, according to the latest AI Index Report published by Stanford. But as healthcare leaders, we face a dilemma.

The excitement about the possibilities encourages us to move faster, while the new and sometimes opaque nature of the technology raises alarms and urges us to slow down and play it safe. Success depends on our ability to strike a balance between accelerating the adoption of new AI-based capabilities and ensuring safe and secure implementation.

AI relies on high-quality data to provide accurate insights and recommendations. Healthcare providers must ensure that the data used to train AI models is complete, accurate and representative of the patients they serve.

They must also be vigilant in monitoring their data for drift and integrity issues to ensure that the AI is providing accurate and up-to-date information. This also applies to the use of large pre-trained language models, where the goals of quality and integrity remain the same even if the methods of validation are new.
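As a concrete illustration of that kind of monitoring, the sketch below runs two simple pre-training checks: completeness and distribution drift against a reference sample. The thresholds and column handling are assumptions made for the example, not a prescribed standard.

```python
# Hedged sketch: basic data-quality gates before (re)training a model.
# Thresholds are illustrative assumptions, not clinical or regulatory standards.
import pandas as pd
from scipy.stats import ks_2samp

def data_quality_issues(current: pd.DataFrame, reference: pd.DataFrame,
                        numeric_cols: list, max_missing: float = 0.05,
                        drift_alpha: float = 0.01) -> list:
    """Flag columns with too many missing values or distribution drift."""
    issues = []
    for col in numeric_cols:
        missing = current[col].isna().mean()
        if missing > max_missing:
            issues.append(f"{col}: {missing:.1%} missing (limit {max_missing:.0%})")
        # Two-sample Kolmogorov-Smirnov test: a low p-value suggests the new
        # data's distribution has shifted away from the reference data.
        _, p_value = ks_2samp(current[col].dropna(), reference[col].dropna())
        if p_value < drift_alpha:
            issues.append(f"{col}: possible drift (KS p={p_value:.3g})")
    return issues

# Usage: block retraining if data_quality_issues(new_df, baseline_df, cols)
# returns a non-empty list, and route the flagged columns for review.
```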

As I’ve said, bias in AI can have far-reaching implications for healthcare, including widening health disparities and degrading clinical decision-making. Healthcare organizations should be wary of AI models that do not compensate for bias.

As AI becomes more and more prevalent in healthcare, it is important for healthcare organizations to be transparent about how they are using AI. Additionally, they need to ensure that there is oversight and public accountability for the use of AI in patient care to prevent mistakes or errors from going undetected.

AI also raises many ethical concerns, including questions about privacy, data ownership and informed consent. Healthcare organizations must keep these considerations in mind and ensure that their use of AI, whether directly or indirectly through vendors, is consistent with their ethical principles and values.

AI is here to stay and to evolve, in healthcare and beyond, especially with the new and exciting advances in generative AI and large language models. It’s impossible to stop this evolution – and it’s unwise to try – because after decades of adopting technology in the healthcare industry, we still haven’t come up with solutions that reduce the complexity of healthcare while delivering quality care.

In fact, many technologies have instead added new tasks and additional burden for providers. With AI, and especially the advent of generative AI, we see huge opportunities to finally advance toward this challenging goal.

However, for the reasons I have mentioned, we need to implement safeguards to protect transparency, impartiality and security. What is interesting is that, if well thought out, it is these very safeguards that will accelerate adoption by protecting us from the failures that would otherwise set back AI-based progress.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a publication of HIMSS Media.



