How to use from vLLM
Install vLLM from pip and serve the model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "Gustavosta/MagicPrompt-Dalle"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "Gustavosta/MagicPrompt-Dalle",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
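You can also call the server from Python. Here is a minimal sketch using the OpenAI Python client (assumes pip install openai; the api_key value is a placeholder, since a default vLLM deployment does not check it, and the prompt is just an illustration):

# Call the vLLM server through its OpenAI-compatible API
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="Gustavosta/MagicPrompt-Dalle",
    prompt="A portrait of a cat",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)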
Use Docker
docker model run hf.co/Gustavosta/MagicPrompt-Dalle
MagicPrompt - Dall-E 2

This is a model from the MagicPrompt series, a set of GPT-2 models intended to generate prompt text for image-generation AIs; in this case, Dall-E 2.

πŸ–ΌοΈ Here's an example:

This model was trained on a dataset of about 26k prompts filtered and extracted from various sources, such as the Web Archive, the Dall-E 2 subreddit, and dalle2.gallery. This is a relatively small dataset, but we have to consider that Dall-E 2 is a closed service, so for now we only have prompts from people who have access to it and choose to share them. The model was trained for about 40,000 steps, and I plan to improve it if possible.

If you want to test the model with a demo, you can go to: "spaces/Gustavosta/MagicPrompt-Dalle".
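If you prefer to run the model locally instead of using the demo Space, here is a minimal sketch using the 🤗 Transformers text-generation pipeline (assumes pip install transformers torch; the seed prompt and generation settings are just an illustration):

# Generate a Dall-E 2 style prompt from a short seed phrase
from transformers import pipeline

pipe = pipeline("text-generation", model="Gustavosta/MagicPrompt-Dalle")

result = pipe("A portrait of a cat", max_new_tokens=60, num_return_sequences=1)
print(result[0]["generated_text"])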

💻 You can see other MagicPrompt models:

βš–οΈ Licence:

MIT

When using this model, please credit: Gustavosta

Thanks for reading this far! :)
