4. Considerations for AI in pharmacy practice
Development and collaboration
AI tools for pharmacy practice that aim to improve the safe and effective use of medicines must be co-produced with pharmacists, data scientists, developers, and patients.
By building professional connections with data scientists, teams can identify the problems that require a digital solution and consider whether an AI tool is the appropriate route to improved outcomes and value for money. AI systems should be built in line with clinical guidelines and peer-reviewed research. When developing a new model, co-producing the prototype and supporting the testing phases with real data helps ensure the tool is fit for purpose and can be manually validated, as sketched below.
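As one illustration of that manual-validation step, the sketch below compares a hypothetical interaction-screening prototype against prescriptions already reviewed by pharmacists; the `flag_interaction` rule, the drug pairs, and the reference labels are all invented for the example and do not describe any real tool.

```python
# A minimal sketch of manual validation, assuming a hypothetical
# drug-interaction screening prototype; the rule, drug pairs, and
# reference labels are illustrative only.

def flag_interaction(drugs: set[str]) -> bool:
    """Hypothetical prototype rule: flag one known high-risk pair."""
    return {"warfarin", "aspirin"} <= drugs

# Prescriptions manually reviewed by pharmacists during co-production;
# the second element is the pharmacist's judgement (the reference label).
validation_set = [
    ({"warfarin", "aspirin"}, True),
    ({"warfarin", "paracetamol"}, False),
    ({"amlodipine", "simvastatin"}, True),   # interaction the prototype misses
    ({"metformin", "ramipril"}, False),
]

agreements = sum(
    flag_interaction(drugs) == label for drugs, label in validation_set
)
print(f"Agreement with pharmacist review: {agreements}/{len(validation_set)}")
```

In practice the reference standard would be a much larger, formally governed validation set, but the principle of comparing the tool's outputs against independent professional judgement is the same.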
Depending on the processes involved, the pharmacy and AI development teams may work within a wider multidisciplinary team to ensure that unintended impacts of the technology are identified and mitigated, and that the application meets real-world needs.
Requirements for information quality, security, standards, and governance must be adhered to in the development and deployment of AI technologies in pharmacy practice.
To validate outputs, it must be possible to critically appraise AI tools, with a focus on explainability, reducing the risk of the "black box" phenomenon, in which it cannot be ascertained how the tool reached its output from the original data input.
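As one hedged illustration of appraising explainability, the sketch below applies permutation importance, one of several established techniques, to a synthetic tabular risk model; the feature names, data, and model are invented for the example and do not describe any real pharmacy tool.

```python
# A minimal explainability sketch using permutation importance on a
# synthetic tabular risk model; features and data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["age", "num_medicines", "renal_function", "weight"]
X = rng.normal(size=(200, len(features)))
y = (X[:, 1] + 0.5 * X[:, 0] > 0).astype(int)  # synthetic "high-risk" label

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank features by how much shuffling each one degrades performance:
# a first step towards explaining which inputs drive the output.
for name, score in sorted(
    zip(features, result.importances_mean), key=lambda p: -p[1]
):
    print(f"{name:>15}: {score:.3f}")
```

Techniques like this do not fully open the black box, but they give appraisers evidence about whether the inputs driving a tool's outputs are clinically plausible.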
A risk of adopting AI models in work processes is that they can generate incorrect or misleading results, known as AI hallucinations. One way to mitigate this is to consider how the tool was trained, i.e., on which dataset, and to ensure there is a valid evidence base for the tool under consideration. The House of Commons Science, Innovation and Technology Committee published an interim report12 into the governance of AI which describes twelve challenges, including access to data, bias, liability, and privacy.
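A complementary safeguard is to validate generated outputs against an authoritative reference before they reach a user. The sketch below is a minimal, assumed example: `FORMULARY` stands in for a real source such as the NHS dm+d, the suffix-based detection is deliberately naive, and "zorvastatin" is an invented name.

```python
# A minimal sketch of one hallucination safeguard: checking a generative
# tool's output against an authoritative reference before it is surfaced.
# FORMULARY stands in for a real source such as the NHS dm+d.
FORMULARY = {"amoxicillin", "atorvastatin", "metformin", "ramipril"}

def check_drug_mentions(generated_text: str) -> list[str]:
    """Return drug-like tokens that cannot be verified in the formulary."""
    tokens = {t.strip(".,").lower() for t in generated_text.split()}
    # Naive, illustrative heuristic for spotting drug-like names.
    suspected = {t for t in tokens if t.endswith(("cillin", "statin", "formin", "pril"))}
    return sorted(suspected - FORMULARY)

output = "Consider atorvastatin or the newer agent zorvastatin."
unverified = check_drug_mentions(output)
if unverified:
    print("Unverified drug names, refer for human review:", unverified)
```

Flagging unverifiable outputs for human review keeps the pharmacist, not the model, as the final arbiter.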
AI must be deployed in ways that reduce health inequalities, not widen them. Mitigating bias within the datasets used to train AI tools is crucial. Most large language models are trained on vast, unstructured data from the internet; efforts should be made to balance datasets to better represent underrepresented populations. The Professional Record Standards Body (PRSB), in its position statement on AI and Health Information Standards13, emphasises the importance of rigorous health and social care information standards and upholds the need for standards that respect confidentiality and consent, promote transparency, and provide clarity on accountability for AI outputs and for clinical decisions based on them.
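As a minimal sketch of one such mitigation, the example below upsamples an underrepresented group in a training set; the column names, groups, and sizes are illustrative assumptions, not real patient data.

```python
# A minimal sketch of re-balancing a training set so an underrepresented
# group is not drowned out; all columns and values are illustrative.
import pandas as pd

df = pd.DataFrame({
    "ethnic_group": ["A"] * 90 + ["B"] * 10,   # group B underrepresented
    "outcome": [0, 1] * 45 + [0, 1] * 5,
})

# Upsample each group (with replacement) to the size of the largest group.
target = df["ethnic_group"].value_counts().max()
balanced = pd.concat(
    group.sample(target, replace=True, random_state=0)
    for _, group in df.groupby("ethnic_group")
)
print(balanced["ethnic_group"].value_counts())  # both groups now size 90
```

Upsampling with replacement is only the simplest option; collecting more representative real data is preferable where feasible.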
Regulatory risks
Generative AI tools may inadvertently collect and store sensitive data without legal authorisation. This poses a risk, as such data could be used to train future AI models, introducing unprecedented challenges to data security and privacy, including intellectual property. Privacy and data governance processes should ensure that government and best-practice data security principles are not breached by the adoption of AI. Using free, off-the-shelf AI tools can expose organisations to regulatory risks around data collection and usage that may not apply to proprietary tools, where more stringent data policies are enforced through subscription or licensing agreements.
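One defensive measure, sketched below under stated assumptions, is to redact obvious identifiers before any text reaches an external tool; the patterns (an NHS-number-like format, email, UK phone) are simplified illustrations and no substitute for a proper information-governance process.

```python
# A minimal sketch of redacting obvious identifiers before text is sent
# to an external AI tool; patterns are simplified illustrations only.
import re

PATTERNS = {
    "NHS_NUMBER": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"(?<!\w)(?:\+44|0)\d[\d ]{8,11}\d\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known identifier pattern with a tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Patient 943 476 5919 (jane@example.com) reports dizziness."
print(redact(note))  # -> "Patient [NHS_NUMBER] ([EMAIL]) reports dizziness."
```

Automated redaction reduces, but does not eliminate, the risk of disclosure, so it should sit alongside consent and contractual controls rather than replace them.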
Third-party AI suppliers should be investigated and monitored to ensure their standards, ethics, and governance align with your own. Users should not enter personal or sensitive data into any AI tool unless patients (whose data is being shared with the third party) have consented and the necessary corporate agreements are in place.