DC Field | Value | Language |
dc.contributor.author | BOULANOUAR, Amina Tassenime | - |
dc.date.accessioned | 2024-09-23T08:29:44Z | - |
dc.date.available | 2024-09-23T08:29:44Z | - |
dc.date.issued | 2024 | - |
dc.identifier.uri | https://repository.esi-sba.dz/jspui/handle/123456789/615 | - |
dc.description | Supervisor : Dr. BENABDERRAHMANE Sid Ahmed Co-Supervisor : Pr. BENSLIMANE Sidi Mohammed | en_US |
dc.description.abstract | The rapid advancement of Artificial Intelligence (AI) and its subfields, including Machine
Learning (ML) and Deep Learning (DL), has revolutionized numerous aspects of
modern society. These technologies, particularly Natural Language Processing (NLP),
have enabled the analysis and understanding of human language at an unprecedented
scale. One critical application of NLP is the examination of societal issues such as
gender bias, which continues to permeate domains including politics, education, and
science.
This thesis explores the dynamics of trust in, and preference for, male and female
advisors during recommendation processes on online platforms such as Quora,
Twitter, and Stack Overflow. Using advanced NLP techniques, we conducted sentiment
analysis and topic modeling to investigate gender dominance and trust patterns.
The datasets were split into 80% training and 20% testing sets, and the Synthetic
Minority Over-sampling Technique (SMOTE) was combined with k-fold
cross-validation (n_splits = 5) to balance the distribution of gender-related data points,
ensuring robustness and fairness in the analysis.
Our findings reveal significant variations in sentiment and trust dynamics across
platforms and topics. Sentiment analysis with VADER, TextBlob, BERT, and RoBERTa
provided complementary insights into gender-related sentiment distributions. Topic
modeling highlighted the gender proportions across subjects, with sentiment
distribution and trust analysis offering further granularity. Classification results from
different ML models showed varying accuracies, underscoring the importance of model
selection in gender bias studies. | en_US |
dc.language.iso | en | en_US |
dc.subject | Natural Language Processing | en_US |
dc.subject | Large Language Models | en_US |
dc.subject | Chatbots | en_US |
dc.subject | Data Extraction | en_US |
dc.subject | Web Scraping | en_US |
dc.subject | Data Preprocessing | en_US |
dc.subject | Data Augmentation | en_US |
dc.subject | Machine Learning | en_US |
dc.subject | Transformers | en_US |
dc.subject | BERT | en_US |
dc.subject | RoBERTa | en_US |
dc.subject | ChatGPT | en_US |
dc.subject | Sentiment Analysis | en_US |
dc.subject | Bias | en_US |
dc.subject | Gender | en_US |
dc.subject | Deep Learning | en_US |
dc.title | Assessing the trust in male and female advisors during recommendation processes. | en_US |
dc.type | Thesis | en_US |
Appears in Collections: | Ingenieur |