AI cannot deny health care coverage

The Centers for Medicare & Medicaid Services (CMS) made this clear in a memo to all Medicare Advantage insurers: health insurance companies cannot use artificial intelligence or algorithms to make coverage determinations or to deny care to members on Medicare Advantage plans.

The memo, which is structured as an FAQ on Medicare Advantage (MA) plan rules, was released shortly after patients filed lawsuits alleging that UnitedHealth and Humana have been denying care to elderly people on MA plans based on a deeply flawed AI-powered tool. The lawsuits, which seek class-action status, center on nH Predict, an AI tool created by NaviHealth, a UnitedHealth subsidiary, and used by both insurers.

The lawsuits claim that nH Predict produces harshly restrictive estimates of how long a patient will need post-acute care in facilities such as skilled nursing homes and rehabilitation centers following an acute injury, illness, or event such as a fall or stroke. And although the estimates frequently conflict with Medicare coverage rules and the recommendations of prescribing physicians, NaviHealth employees who deviate from them risk disciplinary action. For example, while MA plans typically provide up to 100 days of covered care in a nursing home after a three-day hospital stay, patients on UnitedHealth's MA plans using nH Predict rarely remain in nursing homes for more than 14 days before receiving payment denials, according to the lawsuits.

Particular caution

Exactly how nH Predict works is unclear, but it reportedly bases its predictions on a database of six million patients. Still, people familiar with the software say it accounts for only a small set of patient factors rather than providing a full assessment of each patient's individual circumstances.

According to the CMS memo, this is plainly unacceptable. Because insurers must base coverage decisions on each individual patient's circumstances, the CMS said, an algorithm that determines coverage based on a larger data set rather than the patient's medical history, the physician's recommendations, or clinical notes would not be compliant.

The CMS then offered a hypothetical that closely matches the scenario described in the lawsuits:

In deciding whether to terminate post-acute care services, an algorithm or software tool can help providers or MA plans predict a likely length of stay, but that prediction cannot by itself justify ending coverage.

Instead, the CMS said, before an insurer can terminate coverage, the individual patient's condition must be reviewed, and the denial must be based on coverage criteria posted publicly on a non-password-protected website. Moreover, insurers that deny care must provide a thorough and specific explanation of why the services are either no longer reasonable and necessary or no longer covered, including a description of the applicable coverage criteria and rules.

In the lawsuits, patients allege that insurers failed to adequately explain why their physician-recommended care was abruptly and wrongly denied.

Fidelity

Overall, the CMS concludes that insurers can use AI tools in evaluating coverage, but largely only as a check that they are following the rules. An algorithm or software tool should be used only to ensure fidelity to coverage criteria, the CMS said. And because publicly posted coverage criteria are static and unchanging, AI cannot be used to apply hidden coverage criteria or to shift the criteria over time.

By issuing a general warning about algorithms and artificial intelligence, the CMS sidesteps any debate over what exactly counts as artificial intelligence, noting that many terms are used interchangeably in the context of rapidly developing software tools.

Algorithms can range from predictive models (estimating the likelihood of a future hospital admission, for example) to decisional flow charts built from a series of if-then statements (for instance, if a patient has a given diagnosis, they should be able to obtain a given test). Artificial intelligence, meanwhile, is a machine-based system that can make predictions, recommendations, or decisions influencing real or virtual environments for a given set of human-defined objectives. AI systems perceive real and virtual environments through machine- and human-based inputs, automatically abstract those perceptions into models, and use those models to infer information or decide on actions.
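To make the CMS's distinction concrete, here is a minimal sketch of the two categories. Everything in it is hypothetical: the function names, features, and weights are illustrative only and have nothing to do with nH Predict or any real insurer's logic.

```python
# Hypothetical sketch of the two categories the CMS describes.
# All names, features, and weights are made up for illustration.

def decisional_flow_chart(diagnosis: str) -> bool:
    """If-then rule: a fixed, transparent mapping from a
    condition to an action (e.g., approving a test)."""
    if diagnosis == "suspected_fracture":
        return True  # patient should be able to obtain an X-ray
    return False


def predictive_algorithm(age: int, prior_admissions: int) -> float:
    """Toy predictive model: scores the likelihood of a future
    hospital admission from a few patient features."""
    score = 0.005 * age + 0.05 * prior_admissions  # made-up weights
    return min(score, 1.0)  # clamp to a 0-1 probability


print(decisional_flow_chart("suspected_fracture"))        # True
print(predictive_algorithm(age=78, prior_admissions=2))   # ~0.49
```

Under the memo's rules, the output of something like `predictive_algorithm` could inform a length-of-stay estimate, but the individual patient's circumstances would still have to be reviewed before coverage could be terminated.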

The CMS also openly expressed concern that these technologies can amplify discrimination and bias, as has already happened with racial bias, and cautioned insurers to ensure that any AI tool or algorithm they use neither introduces new biases nor perpetuates existing ones.

Although the memo as a whole merely clarified existing MA requirements, the CMS closed it by telling insurers that it is increasing its audit activities and will be monitoring closely whether MA plans are using and applying internal coverage criteria that do not appear in Medicare rules. Non-compliance can result in warning letters, corrective action plans, financial penalties, and sanctions on enrollment and marketing.
