I’m currently researching AI-based methods for personality trait recognition, and my team recently completed a systematic review of the literature on this topic, which was published in Neurocomputing (April 2025).
We reviewed over 100 studies that applied machine learning and deep learning techniques to recognize personality traits from text, voice, video, and social media data. Despite the growing interest, we noticed significant fragmentation in the field:
Different modalities rely on very different models (e.g., BERT for text, CNN-LSTM architectures for video, and wav2vec2 for speech).
There’s no unified taxonomy for model selection or dataset alignment.
Many works overlook cross-modal fusion or transfer learning, even though personality is inherently multimodal.
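To make the cross-modal fusion point concrete, here is a minimal late-fusion sketch: per-modality embeddings (of the kind a BERT, wav2vec2, or video encoder would produce) are concatenated and passed through a linear head scoring the Big Five traits. All dimensions, names, and weights here are illustrative assumptions, not taken from any of the reviewed systems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality embeddings, standing in for real encoder outputs
# (e.g., a BERT [CLS] vector, pooled wav2vec2 features, pooled video features).
text_emb = rng.standard_normal(768)
audio_emb = rng.standard_normal(512)
video_emb = rng.standard_normal(256)

def late_fusion_predict(embeddings, weights, bias):
    """Concatenate modality embeddings and apply a linear head that
    scores the Big Five (OCEAN) traits, squashed to (0, 1) per trait."""
    fused = np.concatenate(embeddings)      # shape (768 + 512 + 256,) = (1536,)
    logits = weights @ fused + bias         # shape (5,), one logit per trait
    return 1.0 / (1.0 + np.exp(-logits))    # sigmoid per trait

# Randomly initialised head, standing in for a trained one.
W = rng.standard_normal((5, 1536)) * 0.01
b = np.zeros(5)

scores = late_fusion_predict([text_emb, audio_emb, video_emb], W, b)
traits = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]
for trait, score in zip(traits, scores):
    print(f"{trait}: {score:.3f}")
```

Even this toy version shows why fusion choices matter: the head sees all modalities jointly, so a missing or noisy modality affects every trait score, which is one reason attention-based fusion is often preferred over plain concatenation.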
Our review aims to provide:
A practical taxonomy of ML/AI models for personality recognition.
A critical analysis of multimodal datasets and their limitations.
Guidelines for selecting suitable AI methods for specific modalities.
You can access the article here: AI Methods for Personality Traits Recognition: A Systematic Review https://doi.org/10.1016/j.neucom.2025.130301
❓ My question to the community: What ML/DL models or frameworks have you found most effective or promising for multimodal personality recognition?
Are there any emerging approaches (e.g., foundation models, graph neural networks, transformers with multimodal fusion) that you believe can generalize across modalities?
I'd love to hear your thoughts or experiences if you've worked in this area. Feel free to critique or extend our proposed taxonomy as well!
Thanks in advance.