Algorithmic systems used in healthcare contexts are primarily developed for the benefit of the public. It is therefore essential that these systems are trusted by the individuals for whose benefit they are deployed. Drawing inspiration from the principles embedded in the testing of the safety, efficacy and effectiveness of new medicinal products, from concurrent design engineering and from professional certification requirements, the authors propose, for the first time, a preliminary competency-based ‘Algorithmic Ethics’ effectiveness impact assessment framework for developers of AI systems used in healthcare contexts. They conclude that this set of principles should cover the entire ‘production lifecycle’ of algorithmic systems, to ensure the optimal use of AI technologies, avoiding bias and discrimination while delivering the best possible outcomes and simultaneously increasing decision-making capacity and the accuracy of results. Because AI is only as good as those who program it and the system in which it operates, the robustness and trustworthiness of its ‘creators’ and ‘deployers’ should be fostered by a certification system that guarantees their knowledge and understanding of ethical aspects, as well as their competence in integrating these aspects into trustworthy AI systems used in healthcare contexts.