A note on certain implications of clinical artificial intelligences for the field of medico-legal semiotics
Comparative Legilinguistics, volume 62, 2025

Keywords

semiotics
biomedicine
AI
clinical
XAI
explainability
medicine

How to Cite

Senechal, C., & Léger-Riopel, N. (2025). A note on certain implications of clinical artificial intelligences for the field of medico-legal semiotics. Comparative Legilinguistics, 62, 175–185. https://doi.org/10.14746/cl.2024.62.4

Abstract

Artificial intelligence has profound implications for the field of clinical practice, as well as for semiotics and law. In this article, we articulate and explain the different types of clinical artificial intelligences (CAIs), since their normativity often stems from their type (symbolic or connectionist) (Harnad, 1990) and from their relative autonomy/agency. Older, symbolic AI, while more explainable, did not offer the potential of the current, second-generation CAIs. The reasoning used by CAIs remains largely opaque, generally unintelligible and unexplainable for human interpreters, and sometimes even counterfactual (Lee & Topol, 2024). This is also true of the most recent so-called “explainable” AIs, which remain imperfect and only very partially explainable (Reddy, 2022). The most recent literature reveals that the very question of AI explainability continues to be one of the most heavily debated issues concerning CAIs (Hildt, 2025). In this article, we show that the solution to the black-box problem of CAIs resides in an investigation of the (bio)semiotic nature of CAIs themselves, but also of the problems that surround their explainability. We conclude with solutions to promote transparency in the use of CAIs.


References

The Lancet Digital Health (2019). Walking the tightrope of artificial intelligence guidelines in clinical practice. The Lancet Digital Health 1(3), e100. DOI: https://doi.org/10.1016/S2589-7500(19)30063-9

Amann, J., Blasimme, A., Vayena, E., Frey, D., & Madai, V. I. (on behalf of the Precise4Q Consortium) (2020). Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Medical Informatics and Decision Making 20, 310. DOI: https://doi.org/10.1186/s12911-020-01332-6

Andersen, R. S., Høybye, M. T., & Risør, M. B. (2024). Expanding Medical Semiotics. Medical Anthropology 43(2), 91–101. DOI: https://doi.org/10.1080/01459740.2024.2324892

Bi, W. L., et al. (2019). Artificial intelligence in cancer imaging: Clinical challenges and applications. CA: A Cancer Journal for Clinicians 69(2), 127–157. DOI: https://doi.org/10.3322/caac.21552

Burnum, J. F. (1993). Medical diagnosis through semiotics. Giving meaning to the sign. Annals of Internal Medicine 119(9), 939–943. DOI: https://doi.org/10.7326/0003-4819-119-9-199311010-00012

Busnatu, Ş., Niculescu, A.-G., Bolocan, A., Petrescu, G. E. D., Păduraru, D. N., Năstasă, I., Lupușoru, M., Geantă, M., Andronic, O., Grumezescu, A. M., & Martins, H. (2022). Clinical Applications of Artificial Intelligence – An Updated Overview. Journal of Clinical Medicine 11(8), 2265. DOI: https://doi.org/10.3390/jcm11082265

Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal 6(2), 94–98. DOI: https://doi.org/10.7861/futurehosp.6-2-94

Davitti, E. (2019). Methodological explorations of interpreter-mediated interaction: novel insights from multimodal analysis. Qualitative Research 19(1), 7–29. DOI: https://doi.org/10.1177/1468794118761492

Fodor, J.A., & Pylyshyn, Z.W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition 28(1-2), 3–71. DOI: https://doi.org/10.1016/0010-0277(88)90031-5

Glicksberg, B. S., Timsina, P., Patel, D., Sawant, A., Vaid, A., Raut, G., Charney, A. W., Apakama, D., Carr, B. G., Freeman, R., Nadkarni, G. N., & Klang, E. (2024). Evaluating the accuracy of a state-of-the-art large language model for prediction of admissions from the emergency room. Journal of the American Medical Informatics Association 31(9), 1921–1928. DOI: https://doi.org/10.1093/jamia/ocae103

Goldberg, C. B., Adams, L., Blumenthal, D., Flatley Brennan, P., Brown, N., Butte, A. J., Cheatham, M., deBronkart, D., Dixon, J., Drazen, J., Evans, B. J., Hoffman, S. M., Holmes, C., Lee, P., Manrai, A.K., Omenn, G. S., Perlin, J. B., Ramoni, R., Sapiro… Kohane, I. S. (2024). To do no harm — and the most good — with AI in health care. Nature Medicine 30(3), 623–627. DOI: https://doi.org/10.1056/AIp2400036

Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena 42(1–3), 335–346. DOI: https://doi.org/10.1016/0167-2789(90)90087-6

Hastings, J. (2024). Preventing harm from non-conscious bias in medical generative AI. The Lancet Digital Health 6(1), e2–e3. DOI: https://doi.org/10.1016/S2589-7500(23)00246-7

Hildt, E. (2025). What Is the Role of Explainability in Medical Artificial Intelligence? A Case-Based Approach. Bioengineering 12(4), 375. DOI: https://doi.org/10.3390/bioengineering12040375

Johnson, A. E., Brewer, L. C., Echols, M. R., Mazimba, S., Shah, R. U., & Breathett, K. (2022). Utilizing Artificial Intelligence to Enhance Health Equity Among Patients with Heart Failure. Heart Failure Clinics 18(2), 259–273. DOI: https://doi.org/10.1016/j.hfc.2021.11.001

Kuperman, V., & Zislin, J. (2005). Semiotic perspective of psychiatric diagnosis. Semiotica 155, 1–13. DOI: https://doi.org/10.1515/semi.2005.2005.155.1-4.1

Kwiatkowska, M., & Kielan, K. (2013). Fuzzy logic and semiotic methods in modeling of medical concepts. Fuzzy Sets and Systems 214, 35–50. DOI: https://doi.org/10.1016/j.fss.2012.03.011

Lee, S.-I., & Topol, E. J. (2024). The clinical potential of counterfactual AI models. The Lancet 403(10428), 717. DOI: https://doi.org/10.1016/S0140-6736(24)00313-1

Longoni, C., & Morewedge, C. (2019). AI Can Outperform Doctors. So Why Don’t Patients Trust It? Harvard Business Review. https://hbr.org/2019/10/ai-can-outperform-doctors-so-why-dont-patients-trust-it

Maxwell, Y. L. (2024). AHA Sums Up AI’s Potential in Cardiology, but Also the Hurdles Ahead. TCTMD. https://www.tctmd.com/news/aha-sums-ais-potential-cardiology-also-hurdles-ahead

Mennella, C., Maniscalco, U., De Pietro, G., & Esposito, M. (2024). Ethical and regulatory challenges of AI technologies in healthcare: A narrative review. Heliyon 10(4), e26297. DOI: https://doi.org/10.1016/j.heliyon.2024.e26297

Nessa, J. (1996). About signs and symptoms: can semiotics expand the view of clinical medicine? Theoretical Medicine 17, 363–377. DOI: https://doi.org/10.1007/BF00489681

Nowak, E. (2019). Multiculturalism, Autonomy, and Language Preservation. Ergo: An Open Access Journal of Philosophy 6(11). DOI: https://doi.org/10.3998/ergo.12405314.0006.011

Oliver, K., & Pearce, W. (2017). Three lessons from evidence-based medicine and policy: increase transparency, balance inputs and understand power. Palgrave Communications 3, 43. DOI: https://doi.org/10.1057/s41599-017-0045-9

Palaniappan, K., Lin, E. Y. T., & Vogel, S. (2024). Global Regulatory Frameworks for the Use of Artificial Intelligence (AI) in the Healthcare Services Sector. Healthcare (Basel) 12(5), 562. DOI: https://doi.org/10.3390/healthcare12050562

Porcino, A., & MacDougall, C. (2009). The Integrated Taxonomy of Health Care: Classifying Both Complementary and Biomedical Practices Using a Uniform Classification Protocol. International Journal of Therapeutic Massage & Bodywork 2(3), 18–30. DOI: https://doi.org/10.3822/ijtmb.v2i3.40

Quer, G., & Topol, E. J. (2024). The potential for large language models to transform cardiovascular medicine. The Lancet Digital Health 6(10), e767–e771. DOI: https://doi.org/10.1016/S2589-7500(24)00151-1

Ratnani, I., Fatima, S., Mohsin Abid, M., Surani, Z., & Surani, S. (2023). Evidence-Based Medicine: History, Review, Criticisms, and Pitfalls. Cureus 15(2), e35266. DOI: https://doi.org/10.7759/cureus.35266

Reddy, S. (2022). Explainability and artificial intelligence in medicine. The Lancet Digital Health 4(4), e214–e215. DOI: https://doi.org/10.1016/S2589-7500(22)00029-2

Ruschemeier, H. (2023). AI as a challenge for legal regulation – the scope of application of the artificial intelligence act proposal. ERA Forum 23, 361–376. DOI: https://doi.org/10.1007/s12027-022-00725-6

Skalidis, I., Cagnina, A., & Fournier, S. (2023). Use of large language models for evidence-based cardiovascular medicine. European Heart Journal – Digital Health 4(5), 368–369. DOI: https://doi.org/10.1093/ehjdh/ztad041

The CONSORT-AI & SPIRIT-AI Steering Group. (2019). Reporting guidelines for clinical trials evaluating artificial intelligence interventions are needed. Nature Medicine 25, 1467–1468. DOI: https://doi.org/10.1038/s41591-019-0603-3

Thibault, P. J. (1997). Re-reading Saussure: The Dynamics of Signs in Social Life. Routledge.