Unpacking the Black Box: Exploring the Intersection of Trust and Machine Learning
Date
2024
Publisher
Taylor and Francis
Abstract
Artificial intelligence can be defined as the effort to endow computers and other information and communication technologies with human-like competencies in analyzing, synthesizing, interpreting, inferring, thinking, and evaluating. Unlike earlier, traditional paradigms, artificial intelligence applications operate adaptively, incorporating feedback loops during execution to achieve more accurate and successful computations. The process by which an artificial intelligence system is trained on given data using specific methods is called machine learning. As artificial intelligence becomes increasingly intertwined with human life, the issue of trust in this technology has come to the forefront. In this chapter, we explore the intersection of trust and machine learning, examining the factors that contribute to trust in this technology and the potential consequences of its absence. © 2025 selection and editorial matter, Joanna Paliszkiewicz and Jerzy Gołuchowski; individual chapters, the contributors.
Source
Trust and Artificial Intelligence: Development and Application of AI Technology
Start Page
295
End Page
304