    Recent AI Policy Developments – Can Lessons be Learned from Telehealth Policy?

    Center for Connected Health Policy

    Policymakers have typically been cautious about enacting extensive regulations around artificial intelligence (AI), but as AI becomes more common, meaningful policy activity has been accelerating. CCHP is currently monitoring 94 pending policies at both the state and federal levels regarding AI and healthcare through its Telehealth Legislation and Regulation tracker. Most significant AI policy adoption has occurred at the state level thus far, while recent AI developments at the federal level continue to focus on a largely deregulatory approach.
     
    Recent Federal AI Policy Developments

    As one of his first actions this term, on January 23, 2025, the President signed Executive Order (EO) 14179, Removing Barriers to American Leadership in Artificial Intelligence. The order seeks to revoke any existing policies that may limit American AI innovation, while also positioning the U.S. at the forefront of global AI leadership. In particular, the EO calls for the development of an Artificial Intelligence Action Plan across various federal agencies within 180 days of the order (by July 22, 2025), including identifying inconsistent policies that may be subject to revocation. It also requires the Office of Management and Budget (OMB) to revise particular prior-administration procurement policies (OMB Memoranda M-24-10 and M-24-18) within 60 days of the order. In response, on April 7, 2025, OMB released two new policy memos (M-25-21 and M-25-22) regarding federal agency use of AI and federal procurement. According to the fact sheet regarding the memos, they are meant to signal a fundamental shift toward pro-innovation and pro-competition policy, and away from more risk-averse approaches. The fact sheet also notes particular examples of how federal agencies are currently maximizing the benefits of AI, including the Department of Veterans Affairs (VA), which uses AI tools to optimize patient care, such as supporting the identification and analysis of pulmonary nodules during lung cancer screening exams to improve detection and life-saving diagnoses.
     
    The key points included in the two new memos are summarized as follows:
    • OMB Memorandum M-25-21, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust
      • Rescinds and replaces OMB Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.
      • Directs agencies to:
        • Accelerate the Federal use of AI by focusing on three key priorities: innovation, governance, and public trust.
        • Remove unnecessary and bureaucratic requirements that inhibit innovation and responsible adoption, and develop strategies that elevate AI adoption and innovation as a priority, while increasing transparency to the American public, civil society, and industry.
        • Invest in the American AI marketplace and maximize the use of U.S.-developed and U.S.-produced AI products and services.
        • Identify a Chief AI Officer for each agency; OMB will convene an interagency council to maximize efficiencies and coordination.
        • Implement minimum risk management practices for AI that could have significant impacts when deployed (high-impact AI) and prioritize safe, secure, and resilient AI.
    • OMB Memorandum M-25-22, Driving Efficient Acquisition of Artificial Intelligence in Government
      • Rescinds and replaces OMB Memorandum M-24-18, Advancing the Responsible Acquisition of Artificial Intelligence in Government.
      • Seeks to ensure a competitive American AI marketplace by acquiring solutions at the lowest cost, accelerating adoption of AI while avoiding costly dependencies on a single vendor, and communicating clear vendor requirements.
      • Aims to safeguard taxpayer dollars by tracking AI performance and managing risk, ensuring AI systems are consistent with their stated purpose and deliver consistent results to preserve public trust.
      • Promotes effective AI acquisition through cross-functional engagement and robust collaboration across agencies.
    OMB Memorandum M-25-22 also notes that its guidance should be considered in concert with other more general federal policies that may also apply to AI. Additionally, it states that for guidance on regulatory and non-regulatory approaches to AI applications outside of the federal government, agencies should consult OMB Memorandum M-21-06, Guidance for Regulation of Artificial Intelligence Applications, which was released November 17, 2020. The 2020 guidance is largely consistent with the above themes regarding encouraging innovation and growth in AI and reducing unnecessary barriers to the development and deployment of AI. It also notes consideration of non-regulatory approaches, including promoting sector-specific frameworks and voluntary standards. For instance, as mentioned in a recent TechTarget article regarding AI, the healthcare industry is already creating frameworks to ensure responsible uses of AI through collaboratives such as the Coalition for Health AI (CHAI) and the Trustworthy & Responsible AI Network (TRAIN). In terms of regulatory approaches, the memo also mentions that “agencies may use their authority to address inconsistent, burdensome, and duplicative state laws that prevent the emergence of a national market.”
     
    Recent State AI Policy Developments

    As we see states adopting more AI policies, how those laws may interact with federal AI regulations will remain an important area to watch. We have seen a patchwork of inconsistent policy adoption across states and the federal government specific to telehealth over the years, which makes compliance and utilization of remote care increasingly complicated. Nevertheless, states often have different interests and authorities that drive them to promote specific policy goals – such as improving access to care, protecting patient data, controlling costs, or mitigating risks related to new innovations – that may not always align with federal priorities. Some of the most common focus areas of state AI policy include establishing state AI advisory bodies, procurement-related processes, and research and reporting requirements. For instance, last year Indiana adopted SB 150 to create an artificial intelligence task force to study and assess use of AI technology by state agencies, while Maryland adopted SB 818, which requires state departments to conduct data inventories of artificial intelligence systems and to produce a subsequent report and recommendations regarding the use of AI systems in health care delivery and human services.
     
    Another common policy found at the state level specific to healthcare is ensuring provider AI oversight and patient transparency related to AI uses. For example, California approved AB 3030 last year, which requires healthcare providers and facilities that use generative artificial intelligence to produce written or verbal patient communications to ensure that those communications include both a disclaimer indicating to the patient that the communication was generated by generative artificial intelligence, as well as clear instructions describing how a patient may contact a human health care provider, employee, or other appropriate person. The law exempts from this requirement any communication read and reviewed by a human licensed or certified health care provider. Additionally, California adopted SB 1120, which requires health plans and insurers that use an artificial intelligence, algorithm, or other software tool for utilization review or utilization management functions to ensure compliance with specified requirements, including that the tool bases its determinations on specified information and is fairly and equitably applied. Arizona is also considering similar legislation, HB 2175, which would prohibit insurers from using AI to deny claims or prior authorizations for medical services and would require a healthcare provider to review each claim or prior authorization request before issuing a denial.
     
    Inconsistent Policies and Lessons from Telehealth

    As mentioned previously, the confusion that inconsistent policies may create can even be evidenced within the aforementioned federal guidance. For instance, Executive Order (EO) 14179 references a different definition for AI (15 U.S.C. 9401(3)) than OMB Memorandum M-25-21 (Public Law 115-232 (238(g))). Meanwhile, OMB Memorandum M-25-22 references a different definition for “artificial intelligence system” (Public Law 117-263 (7223(4))), and OMB Memorandum M-25-21 creates policies specific to “high-impact AI.” AI is considered high-impact when its output serves as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety. Therefore, as part of conducting internal reviews of high-impact AI use, the memo states that agencies should evaluate the AI’s specific output and its potential risks when assessing the applicability of the high-impact definition. A high-impact determination may apply regardless of whether there is human oversight of the decision or action. While different definitions often serve different purposes, they can also generate confusion around what is captured and required in each specific instance. This is why CCHP closely tracks the different definitions of telehealth on its website, as jurisdictions often create definitions specific to Medicare/Medicaid and private payers, as well as differing definitions for telehealth specific to provider professional requirements. While this may be common practice – different policy definitions within and across jurisdictions – it also may be an opportunity for policymakers to learn from the path telehealth policy has taken, addressing potential confusion at the forefront of policy creation, prior to adopting additional AI policies. 
Oregon, for example, enacted HB 4153 last year to establish a task force on artificial intelligence that is required to examine and identify terms and definitions related to AI that may be used for legislation, beginning with terms and definitions used by federal agencies. Washington adopted SB 5838, which also creates an AI task force to assess current uses and trends, as well as benefits and risks, and make recommendations regarding AI legislation.
     
    Public policy always has to walk a fine line between promoting technological innovation and protecting consumers, especially in healthcare, where both the care provided and policies implemented should remain as patient-centered as possible. Additionally, clear regulatory guidance across jurisdictions will better ensure policy compliance, though variations are often inevitable and reflective of different jurisdictional policy priorities. Therefore, as AI policy continues to progress, the availability of accurate resources and educational information regarding AI policy remains of utmost importance.
     
    For more information regarding the recent federal AI policy developments, please review Executive Order (EO) 14179, OMB Memorandum M-25-21, and OMB Memorandum M-25-22 in their entirety. For more information on pending AI healthcare policy across jurisdictions, please access CCHP’s Telehealth Legislation and Regulation tracker.
     
    For additional AI resources, see the original article: https://mailchi.mp/cchpca/recent-ai-policy-developments-can-lessons-be-learned-from-telehealth-policy
