Please use this identifier to cite or link to this item: https://repository.esi-sba.dz/jspui/handle/123456789/615
Full metadata record
DC Field | Value | Language
dc.contributor.author | BOULANOUAR, Amina Tassenime | -
dc.date.accessioned | 2024-09-23T08:29:44Z | -
dc.date.available | 2024-09-23T08:29:44Z | -
dc.date.issued | 2024 | -
dc.identifier.uri | https://repository.esi-sba.dz/jspui/handle/123456789/615 | -
dc.description | Supervisor: Dr. BENABDERRAHMANE Sid Ahmed; Co-Supervisor: Pr. BENSLIMANE Sidi Mohammed | en_US
dc.description.abstract | The rapid advancement of Artificial Intelligence (AI) and its subfields, including Machine Learning (ML) and Deep Learning (DL), has revolutionized numerous aspects of modern society. These technologies, particularly Natural Language Processing (NLP), have enabled the analysis and understanding of human language at an unprecedented scale. One critical application of NLP is in examining societal issues such as gender bias, which continues to permeate various domains including political, educational, and scientific fields. This thesis explores the dynamics of trust and preference towards male and female advisors during recommendation processes on online platforms such as Quora, Twitter, and Stack Overflow. Utilizing advanced NLP techniques, we conducted sentiment analysis and topic modeling to investigate gender dominance and trust patterns. The datasets were split into 80% training and 20% testing sets, and the Synthetic Minority Over-sampling Technique (SMOTE) was employed in combination with k-fold cross-validation (n_splits=5) to balance the distribution of gender-related data points, ensuring robustness and fairness in the analysis (illustrative sketches of this resampling step and of the multi-model sentiment scoring follow the metadata record below). Our findings reveal significant variations in sentiment and trust dynamics across platforms and topics. Sentiment analysis using VADER, TextBlob, BERT, and RoBERTa models provided diverse insights into gender-related sentiment distributions. Topic modeling highlighted the gender proportions across various subjects, with sentiment distribution and trust analysis offering further granularity. Classification results using different ML models showcased varying accuracies, emphasizing the importance of model selection in gender bias studies. | en_US
dc.language.iso | en | en_US
dc.subject | Natural Language Processing | en_US
dc.subject | Large Language Models | en_US
dc.subject | Chatbots | en_US
dc.subject | Data Extraction | en_US
dc.subject | Web Scraping | en_US
dc.subject | Data Preprocessing | en_US
dc.subject | Data Augmentation | en_US
dc.subject | Machine Learning | en_US
dc.subject | Transformers | en_US
dc.subject | BERT | en_US
dc.subject | RoBERTa | en_US
dc.subject | ChatGPT | en_US
dc.subject | Sentiment Analysis | en_US
dc.subject | Bias | en_US
dc.subject | Gender | en_US
dc.subject | Deep Learning | en_US
dc.title | Assessing the trust in male and female advisors during recommendation processes. | en_US
dc.type | Thesis | en_US
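
Illustrative sketch 1. As a point of reference for the resampling step described in the abstract, the following is a minimal Python sketch of an 80/20 train/test split combined with SMOTE oversampling inside 5-fold cross-validation. It is not taken from the thesis: the random placeholder data, the logistic-regression classifier, and all variable names are assumptions made for illustration; only the split ratio, the use of SMOTE, and n_splits=5 come from the abstract.

# Illustrative sketch (not the thesis code): SMOTE inside stratified 5-fold
# cross-validation, after an 80/20 train/test split.
import numpy as np
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import SMOTE

# X: feature matrix (e.g. TF-IDF or embeddings), y: gender-related labels.
# Random placeholder data stands in for the real dataset, which is not public here.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = rng.choice([0, 1], size=500, p=[0.8, 0.2])  # deliberately imbalanced

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
fold_scores = []
for train_idx, val_idx in skf.split(X_train, y_train):
    # Oversample only the training fold so validation data stays untouched.
    X_res, y_res = SMOTE(random_state=42).fit_resample(
        X_train[train_idx], y_train[train_idx])
    clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
    fold_scores.append(accuracy_score(y_train[val_idx], clf.predict(X_train[val_idx])))

print(f"mean CV accuracy: {np.mean(fold_scores):.3f}")

Applying SMOTE inside each fold, rather than before splitting, keeps oversampled points out of the validation data; the abstract does not state where in the pipeline SMOTE was applied, so this placement is an assumption of the sketch.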
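Illustrative sketch 2. As a hedged illustration of the multi-model sentiment scoring mentioned in the abstract, the snippet below scores one sample sentence with the lexicon-based VADER and TextBlob analyzers; the example text and printed fields are assumptions, and the BERT/RoBERTa scoring cited in the abstract is only noted in a comment rather than reproduced.

# Illustrative sketch (not the thesis code): lexicon-based sentiment scoring with
# VADER and TextBlob. The BERT/RoBERTa results in the abstract would come from
# fine-tuned transformer classifiers rather than these lexicons.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from textblob import TextBlob

# Hypothetical answer text standing in for a real Quora/Stack Overflow post.
text = "This advisor's explanation was clear and genuinely helpful."

vader = SentimentIntensityAnalyzer().polarity_scores(text)  # dict: neg/neu/pos/compound
polarity = TextBlob(text).sentiment.polarity                 # float in [-1, 1]

print("VADER compound:", vader["compound"])
print("TextBlob polarity:", polarity)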
Appears in Collections: Ingenieur

Files in This Item:
File | Description | Size | Format
PFE_FinalVersion_BOULANOUAR-1-1.pdf |  | 51,72 kB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.