# NanoGPT-Abstract-Generator

## Overview
`NanoGPT-Abstract-Generator` is a scaled-down, more efficient variant of the GPT-2 architecture, fine-tuned to generate concise, high-quality abstracts from an input sentence or a short document prompt. It is designed for low-latency inference on general-purpose text generation tasks.

This model is a strong choice for applications that need quick, coherent, and contextually relevant text snippets without the computational overhead of larger models such as GPT-3 or full-sized GPT-2 variants.

## Model Architecture
The model is based on the **GPT-2** decoder-only architecture, significantly scaled down for efficiency (hence 'NanoGPT').
* **Base Model:** GPT-2 Decoder
* **Task:** Causal Language Modeling (`GPT2LMHeadModel`)
* **Size Reduction:** $n_{layer}=8$ (vs. 12 for GPT-2 Base), $n_{embd}=768$.
* **Parameters:** Approximately 100 million (vs. ~124 million for GPT-2 Base; see the sanity-check sketch after this list).
* **Context Window (`n_ctx`):** 512 tokens.
* **Tokenizer:** GPT-2 Tokenizer (BPE vocabulary, 50257 tokens).
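
As a rough sanity check on the parameter count, the configuration above can be instantiated directly with `transformers`. This is a minimal sketch, not the published training code: `n_head=12` is an assumption (the head count is not listed above), and the checkpoint's own `config.json` is authoritative.

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Hypothetical config mirroring the bullets above
config = GPT2Config(
    n_layer=8,         # vs. 12 for GPT-2 Base
    n_embd=768,        # same embedding width as GPT-2 Base
    n_head=12,         # assumed; the head count is not stated in this card
    n_positions=512,   # the 512-token context window (`n_ctx` above)
    vocab_size=50257,  # standard GPT-2 BPE vocabulary
)

model = GPT2LMHeadModel(config)  # randomly initialized; used only to count parameters
print(f"{model.num_parameters():,} parameters")  # ~96M, i.e. roughly 100 million
```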

## Intended Use
* **Abstractive Summarization:** Generating short, descriptive summaries (abstracts) for scientific papers, articles, or blog posts based on the first few sentences.
* **Creative Prompting:** Generating short stories, poem stanzas, or marketing copy from a seed phrase.
* **Chatbot Responses:** Providing fluent, contextualized, short-form responses in a conversational agent.
* **Rapid Prototyping:** Serving as a fast, accessible, and resource-friendly generator for local testing and development.

## Limitations
* **Coherence over Long Sequences:** Due to its reduced size and 512-token context window, coherence may degrade rapidly for generations exceeding roughly 200 tokens (see the prompt-clipping sketch after this list).
* **Factual Accuracy (Hallucination):** Like all autoregressive language models, it can generate text that sounds convincing but is factually incorrect or nonsensical.
* **Safety/Bias:** The model inherits biases present in its pre-training data. Care must be taken in deployment to filter or mitigate harmful outputs.
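
Because the context window is capped at 512 tokens, prompts should be clipped before generation so room remains for new tokens. A minimal sketch: the `clip_prompt` helper is illustrative, not part of this repository, and it uses the stock `gpt2` tokenizer, which shares this model's BPE vocabulary.

```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")  # same 50257-token BPE vocabulary

def clip_prompt(text: str, n_ctx: int = 512, reserve: int = 64) -> str:
    """Truncate a prompt so `reserve` tokens of the context window remain for generation."""
    ids = tokenizer(text, truncation=True, max_length=n_ctx - reserve)["input_ids"]
    return tokenizer.decode(ids)
```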

## Example Code (PyTorch/Transformers Pipeline)

```python
from transformers import pipeline

model_name = "NLP/NanoGPT-Abstract-Generator"
# The 'text-generation' pipeline loads the model and tokenizer automatically
generator = pipeline("text-generation", model=model_name)

prompt = "The recent advancements in quantum computing have shifted the paradigm"

# Generate text with explicit decoding parameters
output = generator(
    prompt,
    max_length=50,           # total length in tokens, prompt included
    num_return_sequences=1,
    temperature=0.7,         # lower values make sampling less random
    top_k=50,                # sample only from the 50 most likely tokens
    do_sample=True,          # enable sampling instead of greedy decoding
    pad_token_id=generator.tokenizer.eos_token_id,  # GPT-2 has no pad token; reuse EOS
)

# Note: 'generated_text' includes the original prompt
print(f"Prompt: {prompt}\n--- Abstract ---\n{output[0]['generated_text']}")

# Example Output:
# "The recent advancements in quantum computing have shifted the paradigm of theoretical cryptography, making several historically secure algorithms vulnerable to polynomial-time attacks. Researchers are now prioritizing the development of post-quantum cryptography protocols."
```