Bias in human data: A feedback from social sciences

dc.authorid Ergun, Duygu/0000-0002-5639-8615
dc.authorscopusid 54586197500
dc.authorscopusid 58099851400
dc.authorscopusid 42961488200
dc.authorscopusid 55807841400
dc.authorwosid Kilincceker, Onur/KHV-3755-2024
dc.contributor.author Takan, Savas
dc.contributor.author Ergun, Duygu
dc.contributor.author Yaman, Sinem Getir
dc.contributor.author Kilincceker, Onur
dc.date.accessioned 2024-07-05T15:25:18Z
dc.date.available 2024-07-05T15:25:18Z
dc.date.issued 2023
dc.department Atılım University en_US
dc.department-temp [Takan, Savas] Ankara Univ, Artificial Intelligence & Data Engn, Ankara, Turkiye; [Ergun, Duygu] Atilim Univ, Fine Arts Design & Architecture, Ankara, Turkiye; [Yaman, Sinem Getir] Ege Univ, Int Comp Inst, Izmir, Turkiye; [Yaman, Sinem Getir] Univ York, Dept Comp Sci, York, N Yorkshire, England; [Kilincceker, Onur] Paderborn Univ, Paderborn, Germany; [Kilincceker, Onur] Univ Antwerp, Antwerp, Belgium en_US
dc.description Ergun, Duygu/0000-0002-5639-8615 en_US
dc.description.abstract The fairness of human-related software has become critical as such systems are widely used in daily life to make life-changing decisions. With their deployment, however, many erroneous results have emerged, and technologies are being developed to address these unexpected outcomes. Companies generally focus on algorithm-oriented errors, yet the resulting fixes work only for certain algorithms, because the cause of the problem lies not only in the algorithm but also in the data itself. Deep learning, for instance, cannot readily establish cause-effect relationships, and the boundaries between statistical and heuristic algorithms are unclear. An algorithm's fairness may therefore vary with the contextual data it receives. From this point of view, our article focuses on what the data should look like, which is not purely a matter of statistics. The picture in question is revealed through a scenario specific to "vulnerable and disadvantaged" groups, one of the most fundamental problems today. With the joint contribution of computer science and the social sciences, the study aims to anticipate the social dangers that may arise from artificial intelligence algorithms, using the clues obtained in this work. To highlight the potential social and mass-scale problems caused by data, Gerbner's "cultivation theory" is reinterpreted. To this end, we conduct an experimental evaluation of popular algorithms and their data sets, such as Word2Vec, GloVe, and ELMo. The article stresses the importance of a holistic approach combining the algorithm, the data, and an interdisciplinary assessment. This article is categorized under: Algorithmic Development > Statistics en_US
dc.identifier.citationcount 0
dc.identifier.doi 10.1002/widm.1498
dc.identifier.issn 1942-4787
dc.identifier.issn 1942-4795
dc.identifier.issue 4 en_US
dc.identifier.scopus 2-s2.0-85153495825
dc.identifier.scopusquality Q1
dc.identifier.uri https://doi.org/10.1002/widm.1498
dc.identifier.uri https://hdl.handle.net/20.500.14411/2530
dc.identifier.volume 13 en_US
dc.identifier.wos WOS:000971417800001
dc.identifier.wosquality Q1
dc.language.iso en en_US
dc.publisher Wiley Periodicals, Inc. en_US
dc.relation.publicationcategory Other en_US
dc.rights info:eu-repo/semantics/closedAccess en_US
dc.scopus.citedbyCount 2
dc.subject artificial intelligence en_US
dc.subject cultivation theory en_US
dc.subject data bias en_US
dc.subject fairness en_US
dc.subject machine learning en_US
dc.subject new media en_US
dc.subject social computing en_US
dc.subject social science en_US
dc.title Bias in human data: A feedback from social sciences en_US
dc.type Review en_US
dc.wos.citedbyCount 1
dspace.entity.type Publication
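
The abstract above describes an experimental evaluation of bias in popular embedding models (Word2Vec, GloVe, ELMo). As a minimal sketch of what such a probe can look like, the Python snippet below measures occupation-gender associations in a pretrained GloVe model loaded through gensim; the model name ("glove-wiki-gigaword-100") and the word lists are illustrative assumptions, not the authors' actual experimental setup, which this record does not include.

import gensim.downloader as api

# Load pretrained GloVe vectors (an assumption for illustration; the
# record does not specify the exact models or data sets the authors used).
vectors = api.load("glove-wiki-gigaword-100")

# Compare each occupation word's cosine similarity to "woman" vs. "man".
# A consistent nonzero delta is the kind of data-borne association the
# article attributes to the training corpus rather than to the algorithm.
for occupation in ["nurse", "engineer", "teacher", "doctor"]:
    to_woman = vectors.similarity(occupation, "woman")
    to_man = vectors.similarity(occupation, "man")
    print(f"{occupation:10s} woman={to_woman:.3f} "
          f"man={to_man:.3f} delta={to_woman - to_man:+.3f}")

Because the probe touches only the vectors, running it against a Word2Vec model instead (e.g., gensim's "word2vec-google-news-300") requires changing only the model name, which mirrors the article's point that such bias travels with the data rather than with any one algorithm.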
