The House Task Force on Artificial Intelligence (AI) released a comprehensive 253-page report providing a roadmap for Congress as it develops policies to regulate and optimize the use of this rapidly advancing technology. The bipartisan task force, co-chaired by Reps. Jay Obernolte (R-CA) and Ted Lieu (D-CA), consulted over 100 experts across various fields to produce recommendations spanning 14 key issue areas, including health care.
The report emphasizes the dual objectives of fostering U.S. leadership while establishing guardrails to protect against identified and emerging risks. It details the transformative potential of AI in health care, alongside the complexities and challenges of ensuring responsible adoption. Task force leaders call for transparency, regulation and continued bipartisan collaboration to address these challenges and maintain public trust. The report serves as a foundation for future legislation and rulemaking, offering guiding principles and actionable recommendations to balance innovation with security and fairness.
AI Adoption in the Health Care System
The task force highlights the role AI is playing in health care, such as improving drug development and clinical decision-making. However, the current state of AI is marked by challenges including regulatory uncertainty, reimbursement issues, limited understanding of AI, data privacy concerns and hesitancy over AI’s role in insurance decisions and coverage.
Drug and Medical Device Development
The report finds that AI is already being used in drug development, where it can expedite the discovery, design and testing of drug candidates, ultimately decreasing the time and cost required to bring a drug to market. For example, it currently takes an average of 12 years and between $314 million and $2.8 billion to move a drug from preclinical testing to approval by the Food and Drug Administration (FDA). The report indicates that efficiencies gained through AI could reduce the price of drugs, speed market entry and make it more cost-effective to invest in producing orphan drugs and drugs to treat rare diseases. Furthermore, the task force found that researchers are using machine learning (ML) and generative AI (GenAI) throughout the first three phases of drug development, where these tools can increase the probability of a successful clinical trial by optimizing design variables. GenAI could also make trials faster, more efficient and cheaper by streamlining design and identifying eligible patients.
Additionally, AI is changing the field of medical devices and the research and development of innovative therapies. The report finds that AI could enhance medical devices and software by improving functionality, efficiency and accuracy, benefiting both the health care system and patient outcomes. Through such improvements, providers could better personalize treatments, enhance surgical precision and deliver better care. However, the report highlights that there is limited understanding of how AI makes its decisions, as well as a lack of high-quality data for training AI models. The authors also highlight the need for more researchers trained in AI and the continuing uncertainty around the legality and limitations of using AI within the medical field. Further, the report notes concerns related to data privacy and security in AI-supported clinical trials.
Clinical Settings
The report details how AI is transforming clinical settings with applications that enhance diagnostics, support clinical decision-making and streamline administrative tasks. In diagnostics, AI shows immense promise in decreasing diagnostic errors, which account for a large share of patient safety incidents and cost the United States over $100 billion annually. For example, ML models already assist in imaging analysis for conditions like cancer, heart disease and Alzheimer’s disease. These tools help identify areas of concern, enabling earlier disease detection and consistent analysis across diverse settings.
Already, GenAI improves medical imaging by removing noise from scans, such as MRIs, to create clearer images without requiring additional radiation exposure. The report finds that while this innovation holds promise, it is not perfect and still needs human oversight. AI is also advancing clinical decision-making by analyzing patient data to predict health outcomes and recommend treatments. Clinical decision support systems (CDSS) further augment patient care by analyzing historical case data and medical literature to suggest treatment options. While research indicates these tools can reduce disparities if implemented carefully, improperly trained models can lead to inequities.
In administrative applications, the report concludes that AI can alleviate provider burnout by reducing the burdens of documentation. GenAI can transcribe interactions between patients and providers and generate detailed notes to improve the quality of electronic health records (EHRs), allowing providers to dedicate more time to patient care. AI can then extract clinical concepts from provider-patient conversations to create accurate, actionable records; however, poorly implemented AI systems could worsen documentation quality, so systems must be evaluated.
Health Insurance Decisions
The task force emphasizes that payers of health care services play a critical role in the coverage of AI services and devices as well as the use of AI tools within the health care industry. However, the report highlights that questions remain unanswered as to whether current Centers for Medicare and Medicaid Services (CMS) policies will be suitable for all AI technology in health care. CMS’s current AI coverage framework focuses on FDA-approved software that helps clinicians make decisions through predictive modeling or algorithms. CMS allows for limited Medicare coverage of AI technology in cases where it meets Medicare’s coverage criteria. As a result, reimbursement for and implementation of AI in health care services remain unclear.
The report also identifies potential issues with health insurers implementing AI tools for insurance decisions, including a lack of transparency. For example, when AI-driven denials of elderly patients’ claims for extended care have been appealed to federal administrative law judges, around 90% were reversed. The task force found that although CMS adopted a final rule on the use of AI in making determinations for Medicare Advantage (MA), the concerns extend to other health coverage programs. The Medicaid and Children’s Health Insurance Program (CHIP) Payment and Access Commission (MACPAC) has announced that it will examine the extent to which managed care organizations (MCOs) use AI to automate parts of the prior authorization process. Even so, concerns remain that AI used as a medical management tool could, when it produces inaccurate or biased results, lead to unnecessary denials and block access to needed treatment.
Policy Challenges
The evolving integration of AI has introduced key policy challenges, including issues with data quality, inaccurate responses, decision-making transparency, privacy and cybersecurity, system interoperability, liability for AI errors, biased decisions and the prioritization of financial gain over patient care and safety.
Data Management – The task force highlights that health care data, including EHRs, has created a rich ecosystem, but the lack of standardized formats, insufficient data quality and poor integration across systems limit its full use. Without the ability to merge data from different EHRs, AI models may be trained on small, nonrepresentative populations, reducing their applicability to broader groups. Additionally, the way federal health agencies organize and share their biomedical data will be important for AI innovation. The report also emphasizes a tension in deidentification: attributes removed to deidentify data, such as gender or race, may be precisely the information needed to train AI models and to check training data for biases or flawed characteristics that could lead to inequitable outcomes, yet retaining that information risks patient privacy.
Health Information Technology – The report identifies significant challenges with AI related to data privacy and interoperability in health information technology policy. AI relies on large datasets, often containing sensitive patient information from EHRs. This extensive data use increases risks to privacy and security, raising concerns about who can access, control and use this information. Data breaches in health care are frequent, stemming from both cyberattacks and internal leaks, and can lead to identity theft, medical fraud and other misuse. While laws like the Health Insurance Portability and Accountability Act (HIPAA) provide frameworks to protect health data through privacy and security rules, these regulations have limitations. Emerging AI systems, especially generative tools that process patient interactions, may pose additional risks if data is shared with external companies.
AI tools must be compatible with existing systems, particularly EHRs, to be effectively integrated into health care. However, health care systems can use different EHR systems, making data sharing and integration difficult. Poor interoperability can hinder the adoption of AI tools, as they may not connect seamlessly across different platforms. Additionally, the concentration of the EHR market among a few vendors poses challenges. The report finds that broader cooperation and standardization are necessary to fully leverage AI’s potential in health care.
Transparency – The report emphasizes the critical need for transparency in AI processes and decision-making, which builds trust between health care professionals and their patients. If these processes are not explained or well understood, diagnoses or treatment recommendations influenced by AI may be undermined. Barriers to transparency include developers withholding their models due to intellectual property (IP) concerns, as well as the complexity of the technology. If providers do not understand the technologies thoroughly, misuse may occur. The report notes transparency is essential to evaluate AI tools’ safety, efficacy and fitness for patient needs.
Bias – The report emphasizes the significant risks associated with bias in AI systems, which occurs when AI algorithms produce inaccurate results, often due to issues in the training data. For example, underrepresenting or overrepresenting certain populations in datasets can lead to systemic inaccuracies. Such biases can result in inappropriate care, poorer health outcomes and diminished trust in health care technologies. Bias can also originate from human influences, whether intentional or unintentional, during the design and deployment of AI systems. In health care, these challenges are particularly critical, as flawed assumptions about patient populations can have life-threatening consequences. The report calls for standards and rigorous evaluations to detect and mitigate bias in AI outputs, ensuring equitable and reliable uses of AI.
Liability – The task force finds a lack of legal and ethical clarity on accountability when AI generates incorrect diagnoses or harmful recommendations. As more parties become involved in developing and deploying AI systems, determining liability becomes more complex. Additionally, the scope of liability policies may influence clinical decision-making. The task force also emphasizes that the final rule addressing AI liability under Section 1557 of the Affordable Care Act (ACA) places responsibility for some AI actions on health care providers rather than AI developers, requiring providers to understand the foundations of the tools they use and when those tools could contribute to discrimination.
Recommendations
The report concludes that AI has the potential to enhance health care by reducing administrative burdens, accelerating drug development and improving clinical diagnoses, ultimately leading to greater efficiency, better patient care and improved outcomes. However, the absence of uniform standards for medical data and algorithms hinders interoperability and data sharing, posing a significant barrier to the widespread adoption and integration of AI tools in health care systems.
The report outlines the following recommendations for policymakers:
- Encourage collaboration, high-quality data access and oversight mechanisms to ensure AI technologies in health care are safe, transparent and effective.
- Invest in strategic research and development through institutions like the National Institutes of Health (NIH) to advance AI applications and maintain leadership in health care innovation.
- Establish guidelines for AI privacy, security, interoperability and post-deployment monitoring to ensure responsible use and lessen risks.
- Adapt Medicare reimbursement policies to account for the timesaving benefits of AI technologies without stifling innovation.
- Address gaps in legal and ethical frameworks to clarify responsibility and protect patients when AI tools produce incorrect or harmful outcomes.
On the Horizon
The report serves as a starting point for ongoing discussions around AI policy in Congress. It highlights key opportunities and challenges in the AI policy landscape, offering a framework for evaluating policy proposals and guiding future legislative action, and emphasizing the need for continued research and debate. With a Republican trifecta, the next Congress has significant potential to influence AI policy. Key figures like Rep. Jay Obernolte (R-CA) are expected to lead the conversation alongside House Speaker Mike Johnson (R-LA) and Majority Leader Steve Scalise (R-LA). In the Senate, Sen. Ted Cruz (R-TX) is poised to become the next chairman of the Senate Commerce, Science and Transportation Committee. He has signaled that AI will be a priority for the committee but has also been a critic of comprehensive AI regulation. Sen. Bill Cassidy (R-LA), who will lead the Senate Health, Education, Labor, and Pensions Committee, also previously released a white paper on AI and health care. While there has been some interest in issues like health data privacy, the Senate may wait for presidential guidance before pursuing AI regulation.
President-elect Donald Trump has committed to reversing President Biden’s executive order (EO) on AI in favor of a light-touch regulatory approach. Despite this pledge, the Department of Health and Human Services (HHS) has stated that it has developed an AI strategy independently of the EO that reflects longstanding efforts from agencies like the NIH and FDA. Outgoing Assistant Secretary for Technology Policy Micky Tripathi has said that the new administration may approach regulation differently, but HHS’ focus is on safety, innovation and trust, regardless of political changes.
The incoming Trump administration will need to navigate the evolving AI regulatory landscape as the technology continues to advance. Trump could still retain and build on certain aspects of the EO, particularly those related to cybersecurity and national security, while continuing to favor pro-innovation AI policies. However, differing views within Trump’s inner circle suggest ongoing debates about how best to approach AI policy. Elon Musk, who has openly opposed licensing requirements and expressed support for safeguards on AI development, is expected to play a significant role in shaping Trump’s AI policy. Vice President-elect JD Vance will also influence the administration’s AI policy, though his views differ somewhat from Musk’s. Vance prioritizes innovation and has expressed concern that overregulation could make it harder for new entrants to innovate and power the next generation of growth. This is a similar argument to that made by Trump-appointed AI and crypto czar David Sacks, who has expressed support for a freer ecosystem empowering AI companies to grow.
THIS DOCUMENT IS INTENDED TO PROVIDE YOU WITH GENERAL INFORMATION REGARDING THE HEALTH CARE CONTEXT OF THE AI TASK FORCE'S REPORT. THE CONTENTS OF THIS DOCUMENT ARE NOT INTENDED TO PROVIDE SPECIFIC LEGAL ADVICE. IF YOU HAVE ANY QUESTIONS ABOUT THE CONTENTS OF THIS DOCUMENT OR IF YOU NEED LEGAL ADVICE AS TO AN ISSUE, PLEASE CONTACT THE ATTORNEYS LISTED OR YOUR REGULAR BROWNSTEIN HYATT FARBER SCHRECK, LLP ATTORNEY. THIS COMMUNICATION MAY BE CONSIDERED ADVERTISING IN SOME JURISDICTIONS.