---
license: cc-by-sa-4.0
language:
  - en
tags:
  - multimodal
  - emotion recognition
  - CMU-MOSEI
  - computational-sequences
  - audio
  - video
  - text
pretty_name: 'CMU-MOSEI: Computational Sequences (Unofficial Mirror)'
dataset_info:
  features:
    - name: CMU_MOSEI_COVAREP.csd
      type: binary
    - name: CMU_MOSEI_Labels.csd
      type: binary
    - name: CMU_MOSEI_OpenFace2.csd
      type: binary
    - name: CMU_MOSEI_TimestampedPhones.csd
      type: binary
    - name: CMU_MOSEI_TimestampedWordVectors.csd
      type: binary
    - name: CMU_MOSEI_TimestampedWords.csd
      type: binary
    - name: CMU_MOSEI_VisualFacet42.csd
      type: binary
---

# CMU-MOSEI: Computational Sequences (Unofficial Mirror)

This repository provides a mirror of the official computational sequence files from the CMU-MOSEI dataset, which are required for multimodal sentiment and emotion research. The original download links are currently down, so this mirror is provided for the research community.

**Note:** This is an unofficial mirror. All data originates from Carnegie Mellon University and the original authors. If you are one of the dataset creators and would like this mirror removed or modified, please open an issue.

## Dataset Structure

- `CMU_MOSEI_COVAREP.csd`: Acoustic features (COVAREP)
- `CMU_MOSEI_Labels.csd`: Sentiment/emotion labels and annotations
- `CMU_MOSEI_OpenFace2.csd`: Facial features (OpenFace 2.0)
- `CMU_MOSEI_TimestampedPhones.csd`: Timestamped phone (phoneme) alignments
- `CMU_MOSEI_TimestampedWordVectors.csd`: Timestamped word embeddings (GloVe/Word2Vec)
- `CMU_MOSEI_TimestampedWords.csd`: Timestamped word alignments
- `CMU_MOSEI_VisualFacet42.csd`: Additional facial/action unit features

All files are in the `.csd` (computational sequence) format and can be loaded with the CMU Multimodal SDK (`mmsdk`); see the Usage section below.
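Since the official download links are down, the files can also be fetched directly from this mirror with the `huggingface_hub` client. A minimal sketch; the repository id below is assumed from this repo's name and may need adjusting:

```python
from huggingface_hub import hf_hub_download

# Assumed repository id for this mirror; adjust if it differs.
REPO_ID = "reeha-parkar/cmu-mosei-comp-seq"

# Download one computational sequence into the local Hugging Face cache.
covarep_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="CMU_MOSEI_COVAREP.csd",
    repo_type="dataset",
)
print(covarep_path)  # local path to the cached .csd file
```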

## Usage

```python
from mmsdk import mmdatasdk

# Example: load the COVAREP acoustic features from a local .csd file
covarep = mmdatasdk.mmdataset({'covarep': 'CMU_MOSEI_COVAREP.csd'})
```
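
For multimodal experiments, the same constructor accepts several sequences at once, and the resulting dataset can be aligned to a reference sequence. A minimal sketch, assuming all `.csd` files sit in the working directory and word-level alignment is wanted (the dictionary keys are arbitrary names chosen here):

```python
from mmsdk import mmdatasdk

# Arbitrary modality names mapped to local .csd paths (assumed to be in the working directory).
recipe = {
    'covarep': 'CMU_MOSEI_COVAREP.csd',
    'openface': 'CMU_MOSEI_OpenFace2.csd',
    'words': 'CMU_MOSEI_TimestampedWords.csd',
    'labels': 'CMU_MOSEI_Labels.csd',
}

# Each computational sequence maps video/segment IDs to 'intervals' and 'features' arrays.
dataset = mmdatasdk.mmdataset(recipe)

# Align every modality to the word-level timestamps (the usual mmsdk workflow).
dataset.align('words')
```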

## Source

The original computational sequences were released by Carnegie Mellon University as part of the CMU-MOSEI dataset and distributed through the CMU Multimodal SDK.

## License

- License: CC BY-SA 4.0
- All data copyright: Carnegie Mellon University and the original authors

## Citation

If you use these files, please cite the original authors:

```bibtex
@inproceedings{zadeh2018multimodal,
  title={Multimodal Language Analysis in the Wild: {CMU-MOSEI} Dataset and Interpretable Dynamic Fusion Graph},
  author={Bagher Zadeh, AmirAli and Liang, Paul Pu and Poria, Soujanya and Cambria, Erik and Morency, Louis-Philippe},
  booktitle={Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  pages={2236--2246},
  year={2018}
}
```