
multilingual-summarization-mBart

Model Overview

The multilingual-summarization-mBart model is a fine-tuned mBART-Large model specializing in abstractive summarization across multiple languages. It takes long-form text (e.g., news articles, reports) in any of 25 supported languages and generates a fluent, concise summary in a specified target language.

Model Architecture

  • Base Model: mBART-Large (Multilingual Bidirectional Auto-Regressive Transformer)
  • Architecture: Sequence-to-Sequence (Encoder-Decoder) Transformer.
    • Encoder: Processes the source text bidirectionally to create a rich contextual representation.
    • Decoder: Generates the summary token by token, conditioned on the encoder output and the previously generated tokens.
  • Fine-tuning: Trained on a large parallel corpus of long documents and their corresponding summaries across varied domains, with a focus on cross-lingual transfer learning.
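The decoder's token-by-token generation described above can be illustrated with a toy autoregressive loop. This is a conceptual sketch only: `next_token` is a placeholder for the real decoder, which scores the full vocabulary conditioned on the encoder output and previously generated tokens.

```python
# Toy sketch of autoregressive (token-by-token) decoding.
def next_token(encoder_summary, generated):
    # Placeholder policy: emit words from a canned summary in order,
    # then an end-of-sequence marker. A real decoder would instead
    # score every vocabulary token at each step.
    canned = encoder_summary.split()
    return canned[len(generated)] if len(generated) < len(canned) else "</s>"

def decode(encoder_summary, max_length=10):
    generated = []
    while len(generated) < max_length:
        token = next_token(encoder_summary, generated)
        if token == "</s>":  # stop at end-of-sequence
            break
        generated.append(token)
    return " ".join(generated)

print(decode("EV adoption accelerated this quarter"))
```

In the real model, beam search replaces this greedy loop by keeping several candidate token sequences in parallel and returning the highest-scoring one.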

Intended Use

  • Cross-lingual Information Retrieval: Quickly generating summaries of foreign-language reports or articles.
  • Content Management: Automating summary generation for large multilingual document libraries.
  • Global News Monitoring: Providing rapid, translated summaries of breaking news from different regions.

Limitations and Ethical Considerations

  • Hallucination: As an abstractive model, it generates novel sentences and can sometimes introduce information not present in the source text (hallucination). Critical information should be verified.
  • Language Fluency: Performance and fluency can vary significantly between the 25 supported languages, often favoring high-resource languages (e.g., English, French, Chinese).
  • Bias Amplification: Summaries tend to reflect and often amplify key themes, potentially magnifying implicit biases or misleading statements from the source document.
  • Input Length: The model has a fixed maximum length (max_position_embeddings=1024). Longer documents must be truncated or segmented, which may result in loss of context.
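Because of the 1024-token limit, a common workaround is to segment long documents, summarize each segment, and optionally re-summarize the concatenated segment summaries. Below is a minimal whitespace-based chunking sketch; a real pipeline would count tokenizer tokens rather than words, and the `max_words` and `overlap` values are illustrative.

```python
def chunk_text(text, max_words=200, overlap=20):
    """Split text into word-count-bounded chunks with a small overlap
    so that context at chunk boundaries is not lost entirely."""
    words = text.split()
    if not words:
        return []
    chunks = []
    start = 0
    while start < len(words):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
        start += max_words - overlap
    return chunks

doc = "word " * 450
print(len(chunk_text(doc, max_words=200, overlap=20)))  # 3 chunks
```

Each chunk would then be fed through the tokenizer and model as shown in the Example Code section, and the per-chunk summaries concatenated or summarized again.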

Example Code

To generate a French summary from an English article:

from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

# Define model and language codes
model_name = "YourOrg/multilingual-summarization-mBart"
model = MBartForConditionalGeneration.from_pretrained(model_name)
tokenizer = MBart50TokenizerFast.from_pretrained(model_name)

# Set source and target language codes
SRC_LANG = "en_XX"
TGT_LANG = "fr_XX"

# English Article
article = "The global shift toward electric vehicles gained significant momentum this quarter, driven by new regulatory mandates in Europe and strong consumer demand in China. Tesla reported record deliveries, while established automakers like Volkswagen and GM announced accelerated phase-out dates for gasoline models. This trend is putting immense pressure on lithium and cobalt supply chains."

# 1. Encode the source text
tokenizer.src_lang = SRC_LANG
encoded_input = tokenizer(article, return_tensors="pt", max_length=1024, truncation=True)

# 2. Generate the summary
generated_ids = model.generate(
    **encoded_input,
    forced_bos_token_id=tokenizer.lang_code_to_id[TGT_LANG],
    max_length=150,
    min_length=20,
    num_beams=4,
)

# 3. Decode the French summary
summary = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(f"Original Text (EN): {article[:50]}...")
print(f"Generated Summary (FR): {summary}")