Title: Assessing the trust in male and female advisors during recommendation processes.
Authors: BOULANOUAR, Amina Tassenime
Keywords: Natural Language Processing
Large Language Models
Chatbots
Data Extraction
Web Scraping
Data Preprocessing
Data Augmentation
Machine Learning
Transformers
BERT
RoBERTa
ChatGPT
Sentiment Analysis
Bias
Gender
Deep Learning
Issue Date: 2024
Abstract: The rapid advancement of Artificial Intelligence (AI) and its subfields, including Machine Learning (ML) and Deep Learning (DL), has revolutionized numerous aspects of modern society. These technologies, particularly Natural Language Processing (NLP), have enabled the analysis and understanding of human language at an unprecedented scale. One critical application of NLP is the examination of societal issues such as gender bias, which continues to permeate domains including politics, education, and science. This thesis explores the dynamics of trust in, and preference for, male and female advisors during recommendation processes on online platforms such as Quora, Twitter, and Stack Overflow. Using advanced NLP techniques, we conducted sentiment analysis and topic modeling to investigate gender dominance and trust patterns. The datasets were split into 80% training and 20% testing sets, and the Synthetic Minority Over-sampling Technique (SMOTE) was employed in combination with five-fold cross-validation to balance the distribution of gender-related data points, ensuring robustness and fairness in the analysis. Our findings reveal significant variations in sentiment and trust dynamics across platforms and topics. Sentiment analysis using the VADER, TextBlob, BERT, and RoBERTa models provided diverse insights into gender-related sentiment distributions. Topic modeling highlighted the gender proportions across subjects, with sentiment distribution and trust analysis offering further granularity. Classification results from different ML models showed varying accuracies, emphasizing the importance of model selection in gender bias studies.
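The resampling strategy described in the abstract (an 80/20 stratified split, with SMOTE applied inside five-fold cross-validation) can be sketched as follows. This is a minimal illustration, not the thesis's code: the SMOTE routine is hand-rolled for self-containment (a real pipeline would likely use imbalanced-learn's implementation), the feature matrix is synthetic, and logistic regression stands in for whichever classifiers the thesis compared.

```python
import numpy as np
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.linear_model import LogisticRegression

def smote(X, y, minority_label, k=5, rng=None):
    """Minimal SMOTE: interpolate each sampled minority point toward one of
    its k nearest minority neighbours until the two classes are balanced."""
    if rng is None:
        rng = np.random.default_rng(0)
    X_min = X[y == minority_label]
    n_new = int((y != minority_label).sum() - len(X_min))
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # Distances from sample i to every minority sample (index 0 after
        # sorting is the sample itself, so skip it).
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]
        j = rng.choice(neighbours)
        gap = rng.random()  # random point on the segment between i and j
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    X_bal = np.vstack([X, synthetic])
    y_bal = np.concatenate([y, np.full(n_new, minority_label)])
    return X_bal, y_bal

# Synthetic, imbalanced two-class dataset standing in for the gender-labelled features.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (80, 4)), rng.normal(1.5, 1.0, (20, 4))])
y = np.array([0] * 80 + [1] * 20)

# 80% training / 20% testing split, stratified by label.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Five-fold CV; SMOTE is applied to the training folds only, so no synthetic
# points leak into the validation folds.
scores = []
for tr_idx, va_idx in StratifiedKFold(
    n_splits=5, shuffle=True, random_state=0
).split(X_tr, y_tr):
    X_bal, y_bal = smote(X_tr[tr_idx], y_tr[tr_idx], minority_label=1)
    clf = LogisticRegression().fit(X_bal, y_bal)
    scores.append(clf.score(X_tr[va_idx], y_tr[va_idx]))
print(round(float(np.mean(scores)), 3))
```

Applying SMOTE inside each fold, rather than before splitting, is the standard way to keep oversampled points out of the evaluation data.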
Description: Supervisor: Dr. BENABDERRAHMANE Sid Ahmed; Co-Supervisor: Pr. BENSLIMANE Sidi Mohammed
URI: https://repository.esi-sba.dz/jspui/handle/123456789/615
Appears in Collections: Ingenieur

Files in This Item:
File: PFE_FinalVersion_BOULANOUAR-1-1.pdf
Size: 51,72 kB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.