<?xml version="1.0" encoding="UTF-8"?>
<doi_batch version="5.4.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.crossref.org/schema/5.4.0" xsi:schemaLocation="http://www.crossref.org/schema/5.4.0 https://www.crossref.org/schemas/crossref5.4.0.xsd" xmlns:jats="http://www.ncbi.nlm.nih.gov/JATS1" xmlns:fr="http://www.crossref.org/fundref.xsd" xmlns:ai="http://www.crossref.org/AccessIndicators.xsd" xmlns:rel="http://www.crossref.org/relations.xsd" xmlns:mml="http://www.w3.org/1998/Math/MathML">
  <head>
    <doi_batch_id>NONE</doi_batch_id>
    <timestamp>20260108150831140</timestamp>
    <depositor>
      <depositor_name>wseas/wseas</depositor_name>
      <email_address>content-registration-form+ja@crossref.org</email_address>
    </depositor>
    <registrant>content-registration-form</registrant>
  </head>
  <body>
    <journal>
      <journal_metadata>
        <full_title>WSEAS TRANSACTIONS ON COMPUTER RESEARCH</full_title>
        <issn media_type="print">1991-8755</issn>
        <issn media_type="electronic">2415-1521</issn>
      </journal_metadata>
      <journal_article>
        <titles>
          <title>Responsible Machine Learning Deployment: Imperative Framework for Ethical Action</title>
        </titles>
        <contributors>
          <person_name sequence="first" contributor_role="author">
            <given_name>Maikel</given_name>
            <surname>Leon</surname>
            <affiliations>
              <institution>
                <institution_name>Department of Business Technology, University of Miami, Miami, Florida, USA</institution_name>
              </institution>
            </affiliations>
          </person_name>
        </contributors>
        <jats:abstract xml:lang="en">
          <jats:p>The rapid expansion of Machine Learning (ML) across finance, healthcare, education, and public policy makes ethical oversight an imperative rather than an optional add-on. This paper responds to that urgency by proposing a comprehensive framework grounded in ten principles—accuracy, fairness, accessibility, security, privacy, transparency, accountability, human oversight, sustainability, and harm avoidance—and positioning them within existing international guidelines. Recent scoping reviews have highlighted the lack of consistent evaluation frameworks across domains and have called for systematic approaches to fairness, accountability, transparency, and ethics. Motivated by case studies of algorithmic redlining, dataset bias, hallucinations in large language models, and ecological concerns, we develop a weighted scoring rubric with thresholds to diagnose ethical compliance. We demonstrate the rubric through case studies, illustrating how the scores identify deficiencies and guide mitigation. The proposed framework is built upon the EU AI Act, NIST’s AI Risk Management Framework, UNESCO’s recommendations, and the OECD AI Principles. We reflect on AI’s energy footprint and the so-called “nuclear dependence” argument, and conclude with a roadmap for practitioners.</jats:p>
        </jats:abstract>
        <publication_date media_type="print">
          <month>01</month>
          <day>08</day>
          <year>2026</year>
        </publication_date>
        <publication_date media_type="online">
          <month>01</month>
          <day>08</day>
          <year>2026</year>
        </publication_date>
        <pages>
          <first_page>141</first_page>
        </pages>
        <publisher_item>
          <item_number item_number_type="article_number">12</item_number>
        </publisher_item>
        <ai:program name="AccessIndicators">
          <ai:license_ref>https://creativecommons.org/licenses/by/4.0/deed.en_US</ai:license_ref>
        </ai:program>
        <doi_data>
          <doi>10.37394/232018.2026.14.12</doi>
          <resource>https://wseas.com/journals/cr/2026/a245118-003(2026).pdf</resource>
        </doi_data>
        <citation_list>
          <citation key="ref0">
            <unstructured_citation>M. Ashwin, S. Jha, G. Prasad, and S. Kumar, “Fake it till you make it? AI hallucinations and ethical dilemmas in anesthesia research and practice,” Journal of Anaesthesiology Clinical Pharmacology, vol. 41, no. 3, p. 381–383, Jun. 2025. [Online]. Available: http://dx.doi.org/10.4103/joacp.joacp_56_25</unstructured_citation>
          </citation>
          <citation key="ref1">
            <unstructured_citation>A. Singhal, N. Neveditsin, H. Tanveer, and V. Mago, “Toward fairness, accountability, transparency, and ethics in AI for social media and health care: Scoping review,” JMIR Medical Informatics, vol. 12, p. e50048, Apr. 2024. [Online]. Available: http://dx.doi.org/10.2196/50048</unstructured_citation>
          </citation>
          <citation key="ref2">
            <unstructured_citation>J. W. Anderson and S. Visweswaran, “Algorithmic individual fairness and healthcare: a scoping review,” JAMIA Open, vol. 8, no. 1, Dec. 2024. [Online]. Available: http://dx.doi.org/10.1093/jamiaopen/ooae149</unstructured_citation>
          </citation>
          <citation key="ref3">
            <unstructured_citation>C. Ferrara, G. Sellitto, F. Ferrucci, F. Palomba, and A. De Lucia, “Fairness-aware machine learning engineering: how far are we?” Empirical Software Engineering, vol. 29, no. 1, Nov. 2023. [Online]. Available: http://dx.doi.org/10.1007/s10664-023-10402-y</unstructured_citation>
          </citation>
          <citation key="ref4">
            <unstructured_citation>Z. Tong, F. Sun, and L. M. Nguyen, Pretraining Data Exposure in Large Language Models: A Survey of Membership Inference, Data Contamination, and Security Implications. Springer Nature Switzerland, Jul. 2025, p. 152–162. [Online]. Available: http://dx.doi.org/10.1007/978-3-031-97144-0_14</unstructured_citation>
          </citation>
          <citation key="ref5">
            <unstructured_citation>D. Lupton, “Towards a digital planetary health perspective: generative AI and the digital determinants of health,” Health Promotion International, vol. 40, no. 5, Sep. 2025. [Online]. Available: http://dx.doi.org/10.1093/heapro/daaf153</unstructured_citation>
          </citation>
          <citation key="ref6">
            <unstructured_citation>M. Leon, “Generative artificial intelligence and prompt engineering: A comprehensive guide to models, methods, and best practices,” Advances in Science, Technology and Engineering Systems Journal, vol. 10, no. 02, p. 01–11, Mar. 2025. [Online]. Available: http://dx.doi.org/10.25046/aj100201</unstructured_citation>
          </citation>
          <citation key="ref7">
            <unstructured_citation>P. Li, J. Yang, M. A. Islam, and S. Ren, “Making AI less “thirsty”,” Communications of the ACM, vol. 68, no. 7, p. 54–61, Jun. 2025. [Online]. Available: http://dx.doi.org/10.1145/3724499</unstructured_citation>
          </citation>
          <citation key="ref8">
            <unstructured_citation>N. T. Nikolinakos, “EU policy and legal framework for artificial intelligence, robotics and related technologies - the AI Act,” Law, Governance and Technology Series, 2023. [Online]. Available: http://dx.doi.org/10.1007/978-3-031-27953-9</unstructured_citation>
          </citation>
          <citation key="ref9">
            <unstructured_citation>E. Tabassi, Artificial Intelligence Risk Management Framework (AI RMF 1.0), Jan. 2023. [Online]. Available: http://dx.doi.org/10.6028/NIST.AI.100-1</unstructured_citation>
          </citation>
          <citation key="ref10">
            <unstructured_citation>D. E. van Norren, “The ethics of artificial intelligence, UNESCO and the African Ubuntu perspective,” Journal of Information, Communication and Ethics in Society, vol. 21, no. 1, p. 112–128, Dec. 2022. [Online]. Available: http://dx.doi.org/10.1108/JICES-04-2022-0037</unstructured_citation>
          </citation>
          <citation key="ref11">
            <unstructured_citation>A. Wodi, “Artificial intelligence (AI) governance: An overview,” SSRN Electronic Journal, 2024. [Online]. Available: http://dx.doi.org/10.2139/ssrn.4840769</unstructured_citation>
          </citation>
          <citation key="ref12">
            <unstructured_citation>J. Zhou, H. Müller, A. Holzinger, and F. Chen, “Ethical ChatGPT: Concerns, challenges, and commandments,” Electronics, vol. 13, no. 17, p. 3417, Aug. 2024. [Online]. Available: http://dx.doi.org/10.3390/electronics13173417</unstructured_citation>
          </citation>
          <citation key="ref13">
            <unstructured_citation>D. Ueda, T. Kakinuma, S. Fujita, K. Kamagata, Y. Fushimi, R. Ito, Y. Matsui, T. Nozaki, T. Nakaura, N. Fujima, F. Tatsugami, M. Yanagawa, K. Hirata, A. Yamada, T. Tsuboyama, M. Kawamura, T. Fujioka, and S. Naganawa, “Fairness of artificial intelligence in healthcare: review and recommendations,” Japanese Journal of Radiology, vol. 42, no. 1, p. 3–15, Aug. 2023. [Online]. Available: http://dx.doi.org/10.1007/s11604-023-01474-3</unstructured_citation>
          </citation>
          <citation key="ref14">
            <unstructured_citation>G. Pennisi, “Operationalization of (trans)gender in facial recognition systems: From binarism to intersectionality,” Future Humanities, vol. 2, no. 3, Jul. 2024. [Online]. Available: http://dx.doi.org/10.1002/fhu2.17</unstructured_citation>
          </citation>
          <citation key="ref15">
            <unstructured_citation>A. Mergen, N. Çetin Kılıç, and M. F. Özbilgin, Artificial Intelligence and Bias Towards Marginalised Groups: Theoretical Roots and Challenges. Emerald Publishing Limited, Apr. 2025, p. 17–38. [Online]. Available: http://dx.doi.org/10.1108/S2051-233320250000012004</unstructured_citation>
          </citation>
          <citation key="ref16">
            <unstructured_citation>J. Dagdelen, A. Dunn, S. Lee, N. Walker, A. S. Rosen, G. Ceder, K. A. Persson, and A. Jain, “Structured information extraction from scientific text with large language models,” Nature Communications, vol. 15, no. 1, Feb. 2024. [Online]. Available: http://dx.doi.org/10.1038/s41467-024-45563-x</unstructured_citation>
          </citation>
          <citation key="ref17">
            <unstructured_citation>M. Han, I. Canli, J. Shah, X. Zhang, I. G. Dino, and S. Kalkan, “Perspectives of machine learning and natural language processing on characterizing positive energy districts,” Buildings, vol. 14, no. 2, p. 371, Jan. 2024. [Online]. Available: http://dx.doi.org/10.3390/buildings14020371</unstructured_citation>
          </citation>
          <citation key="ref18">
            <unstructured_citation>N. J. Abernethy, “Let stochastic parrots squawk: why academic journals should allow large language models to coauthor articles,” AI and Ethics, vol. 5, no. 5, p. 4535–4553, Sep. 2024. [Online]. Available: http://dx.doi.org/10.1007/s43681-024-00575-7</unstructured_citation>
          </citation>
          <citation key="ref19">
            <unstructured_citation>V. Bolón-Canedo, L. Morán-Fernández, B. Cancela, and A. Alonso-Betanzos, “A review of green artificial intelligence: Towards a more sustainable future,” Neurocomputing, vol. 599, p. 128096, Sep. 2024. [Online]. Available: http://dx.doi.org/10.1016/j.neucom.2024.128096</unstructured_citation>
          </citation>
          <citation key="ref20">
            <unstructured_citation>Y. Chen, R. Zhang, J. Lyu, and Y. Hou, “AI and nuclear: A perfect intersection of danger and potential?” Energy Economics, vol. 133, p. 107506, May 2024. [Online]. Available: http://dx.doi.org/10.1016/j.eneco.2024.107506</unstructured_citation>
          </citation>
          <citation key="ref21">
            <unstructured_citation>M. Leon, “AI safety practices and public perception: Historical analysis, survey insights, and a weighted scoring framework,” Intelligent Systems with Applications, vol. 28, p. 200583, Dec. 2025. [Online]. Available: http://dx.doi.org/10.1016/j.iswa.2025.200583</unstructured_citation>
          </citation>
          <citation key="ref22">
            <unstructured_citation>S. M. Pressman, S. Borna, C. A. Gomez-Cabello, S. A. Haider, C. Haider, and A. J. Forte, “AI and ethics: A systematic review of the ethical considerations of large language model use in surgery research,” Healthcare, vol. 12, no. 8, p. 825, Apr. 2024. [Online]. Available: http://dx.doi.org/10.3390/healthcare12080825</unstructured_citation>
          </citation>
          <citation key="ref23">
            <unstructured_citation>S. A. Haider, S. Borna, C. A. Gomez-Cabello, S. M. Pressman, C. R. Haider, and A. J. Forte, “The algorithmic divide: A systematic review on AI-driven racial disparities in healthcare,” Journal of Racial and Ethnic Health Disparities, Dec. 2024. [Online]. Available: http://dx.doi.org/10.1007/s40615-024-02237-0</unstructured_citation>
          </citation>
          <citation key="ref24">
            <unstructured_citation>S. Nasir, R. A. Khan, and S. Bai, “Ethical framework for harnessing the power of AI in healthcare and beyond,” IEEE Access, vol. 12, p. 31014–31035, 2024. [Online]. Available: http://dx.doi.org/10.1109/ACCESS.2024.3369912</unstructured_citation>
          </citation>
          <citation key="ref25">
            <unstructured_citation>A. Joseph, P. Abril, and A. Del Riego, “ChatGPT, Esq.: Recasting unauthorized practice of law in the era of generative AI,” SSRN Electronic Journal, 2025. [Online]. Available: http://dx.doi.org/10.2139/ssrn.5152523</unstructured_citation>
          </citation>
          <citation key="ref26">
            <unstructured_citation>M. Omar, V. Sorin, J. D. Collins, D. Reich, R. Freeman, N. Gavin, A. Charney, L. Stump, N. L. Bragazzi, G. N. Nadkarni, and E. Klang, “Multi-model assurance analysis showing large language models are highly vulnerable to adversarial hallucination attacks during clinical decision support,” Communications Medicine, vol. 5, no. 1, Aug. 2025. [Online]. Available: http://dx.doi.org/10.1038/s43856-025-01021-3</unstructured_citation>
          </citation>
          <citation key="ref27">
            <unstructured_citation>T. D. Jui and P. Rivas, “Fairness issues, current approaches, and challenges in machine learning models,” International Journal of Machine Learning and Cybernetics, vol. 15, no. 8, p. 3095–3125, Jan. 2024. [Online]. Available: http://dx.doi.org/10.1007/s13042-023-02083-2</unstructured_citation>
          </citation>
          <citation key="ref28">
            <unstructured_citation>C. J. Connolly, D. M. Hueholt, and M. A. Burt, “Datasheets for earth science datasets,” Bulletin of the American Meteorological Society, vol. 106, no. 4, p. E642–E648, Apr. 2025. [Online]. Available: http://dx.doi.org/10.1175/BAMS-D24-0203.1</unstructured_citation>
          </citation>
          <citation key="ref29">
            <unstructured_citation>I. Hupont, D. Fernández-Llorca, S. Baldassarri, and E. Gómez, “Use case cards: a use case reporting framework inspired by the European AI Act,” Ethics and Information Technology, vol. 26, no. 2, Mar. 2024. [Online]. Available: http://dx.doi.org/10.1007/s10676-024-09757-7</unstructured_citation>
          </citation>
          <citation key="ref30">
            <unstructured_citation>M. Leon, “The escalating AI’s energy demands and the imperative need for sustainable solutions,” WSEAS TRANSACTIONS ON SYSTEMS, vol. 23, p. 444–457, Dec. 2024. [Online]. Available: http://dx.doi.org/10.37394/23202.2024.23.46</unstructured_citation>
          </citation>
          <citation key="ref31">
            <unstructured_citation>C. Y. Elgin and C. Elgin, “Ethical implications of AI-driven clinical decision support systems on healthcare resource allocation: a qualitative study of healthcare professionals’ perspectives,” BMC Medical Ethics, vol. 25, no. 1, Dec. 2024. [Online]. Available: http://dx.doi.org/10.1186/s12910-024-01151-8</unstructured_citation>
          </citation>
          <citation key="ref32">
            <unstructured_citation>B. C. Cheong, “Transparency and accountability in AI systems: safeguarding wellbeing in the age of algorithmic decision-making,” Frontiers in Human Dynamics, vol. 6, Jul. 2024. [Online]. Available: http://dx.doi.org/10.3389/fhumd.2024.1421273</unstructured_citation>
          </citation>
          <citation key="ref33">
            <unstructured_citation>H. DeSimone, “Explainable AI: The quest for transparency in business and beyond,” in 2024 7th International Conference on Information and Computer Technologies (ICICT). IEEE, Mar. 2024, p. 532–538. [Online]. Available: http://dx.doi.org/10.1109/ICICT62343.2024.00093</unstructured_citation>
          </citation>
          <citation key="ref34">
            <unstructured_citation>M. Leon, G. Napoles, M. M. García, R. Bello, and K. Vanhoof, Two Steps Individuals Travel Behavior Modeling through Fuzzy Cognitive Maps Pre-definition and Learning. Springer Berlin Heidelberg, 2011, p. 82–94. [Online]. Available: http://dx.doi.org/10.1007/978-3-642-25330-0_8</unstructured_citation>
          </citation>
          <citation key="ref35">
            <unstructured_citation>G. Napoles, “Prolog-based agnostic explanation module for structured pattern classification,” Information Sciences, vol. 622, p. 1196–1227, Apr. 2023. [Online]. Available: http://dx.doi.org/10.1016/j.ins.2022.12.012</unstructured_citation>
          </citation>
          <citation key="ref36">
            <unstructured_citation>G. Biagini, “Towards an AI-literate future: A systematic literature review exploring education, ethics, and applications,” International Journal of Artificial Intelligence in Education, vol. 35, no. 4, p. 2616–2666, Mar. 2025. [Online]. Available: http://dx.doi.org/10.1007/s40593-025-00466-w</unstructured_citation>
          </citation>
          <citation key="ref37">
            <unstructured_citation>P. Spitzer, J. Holstein, K. Morrison, K. Holstein, G. Satzger, and N. Kühl, “Don’t be fooled: The misinformation effect of explanations in human–AI collaboration,” International Journal of Human–Computer Interaction, p. 1–29, Nov. 2025. [Online]. Available: http://dx.doi.org/10.1080/10447318.2025.2574511</unstructured_citation>
          </citation>
          <citation key="ref38">
            <unstructured_citation>M. Leon, “GPT-5 and open-weight large language models: Advances in reasoning, transparency, and control,” Information Systems, vol. 136, p. 102620, Feb. 2026. [Online]. Available: http://dx.doi.org/10.1016/j.is.2025.102620</unstructured_citation>
          </citation>
        </citation_list>
      </journal_article>
    </journal>
  </body>
</doi_batch>