As technological development accelerates, industries face unprecedented opportunities and challenges. The shift is particularly pronounced in healthcare, where Information Technology (IT) and Cybersecurity teams must navigate a rapidly evolving landscape.
For CareFirst BlueCross BlueShield (CareFirst), one of the largest not-for-profit healthcare companies in the nation, these changes present significant benefits and unique hurdles that we must address to ensure the organization remains competitive, efficient, and, most of all, secure.
At the helm is Dori Henderson, CareFirst's Senior Vice President and Chief Digital Information Officer, who joined the company with a 25-year background in aerospace and defense.
"The advent of generative artificial intelligence excites me with its potential to enhance productivity and efficiency and ultimately contribute to business growth and stakeholder value," says Dori.
One of the key opportunities presented by modern technology is the ability to automate tasks that were once time-consuming and labor-intensive. IT teams can integrate artificial intelligence (AI) tools to streamline operations across various departments, from claims processing to member services.
By automating repetitive processes, CareFirst can reduce human error, lower operational costs and improve customer satisfaction. For example, AI has helped us increase inbound correspondence handling by 400% while improving its accuracy by 96%. Correspondence that once took six days to reach the right person now arrives within a few hours, improving satisfaction among members, providers and internal staff. AI can also assist in predictive analytics, enabling CareFirst to forecast trends in member health and plan use and allowing for more proactive interventions.
Despite her excitement about AI's potential value, Dori recognizes that AI and all technological advancements must be approached with mindfulness and responsibility, especially when our members' privacy is at stake.
As cloud-based services and applications made ChatGPT easier to deploy at scale and integrate into existing company infrastructure, it was widely adopted across large organizations. CareFirst responded cautiously: we preemptively blocked user access to the application, formulated policies and established a multidisciplinary AI Task Force with representatives from the cybersecurity, vendor management, risk assessment, legal, ethics and compliance departments.
"As technology evolves rapidly, it's essential to ensure each use case is accompanied by proper guardrails and a comprehensive risk evaluation to maintain trust, integrity and value," says Dori.
The framework we’ve implemented at CareFirst rests on five criteria (see the sketch after this list):
- Operational and Economic Impact/Value: A company must assess the potential for technological disruption to jobs, markets or industries. A thorough risk assessment includes an analysis of the economic effects of AI adoption, including potential shifts in skill demands and market competition. Only then do we evaluate whether the use case demonstrates real value.
- Transparency and Explainability: We need understandable and interpretable results. Evaluating algorithm transparency ensures that stakeholders can trace how decisions are made.
- Ethical Implications and Bias: Human oversight is necessary. AI systems may unintentionally perpetuate biases or unethical behavior. A key criterion is evaluating how AI models are designed to mitigate bias and ensure fairness across all demographics, avoiding discriminatory outcomes.
- Data Privacy and Security: High data integrity standards must be maintained. AI often relies on large datasets; the risk of data breaches or misuse of personal information is a concern. We must assess how the technology handles data security, encryption and compliance with privacy regulations.
- Regulatory and Compliance Risks: AI challenges existing legal frameworks. A complete risk evaluation is multifaceted: it spans the assessments outlined above and also examines how AI systems align with current regulations, anticipating changes and ensuring compliance with future governance structures.
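To make the criteria concrete, here is a minimal illustrative sketch, in Python, of how a use-case review against these five gates might be recorded. The class, field names and all-gates-must-pass rule are hypothetical assumptions for illustration; they do not represent CareFirst's actual tooling or scoring.

```python
# Hypothetical sketch only: one plausible way to encode the five review
# criteria as a pass/fail checklist. Names and the approval rule are
# illustrative assumptions, not CareFirst's actual process.
from dataclasses import dataclass, fields

@dataclass
class AIUseCaseReview:
    """One record per proposed AI use case; each field is a pass/fail gate."""
    operational_value: bool      # demonstrated value after impact analysis
    transparency: bool           # decisions are traceable and explainable
    bias_mitigation: bool        # fairness evaluated across demographics
    data_privacy: bool           # encryption, integrity, privacy compliance
    regulatory_alignment: bool   # aligned with current and anticipated rules

    def approved(self) -> bool:
        # Assumed rule: a use case advances only if every criterion passes.
        return all(getattr(self, f.name) for f in fields(self))

# Example: a use case that clears four gates but fails bias review is blocked.
review = AIUseCaseReview(True, True, False, True, True)
print(review.approved())  # False
```

Treating each criterion as a gate, rather than a weighted score, reflects one reading of the framework: no single strength, such as operational value, can offset a failure on privacy, bias or compliance.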
"We aim to persist in our learning journey without falling behind in this era of transformation, always conscious of the potential hazards associated with AI-generated content such as hallucinations, deepfakes, manipulations and biases. Embracing proactive and protective strategies is critical as we navigate these advances," Dori concludes.