Instructions for using ValiantLabs/CodeLlama-70B-Esper with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use ValiantLabs/CodeLlama-70B-Esper with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ValiantLabs/CodeLlama-70B-Esper")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ValiantLabs/CodeLlama-70B-Esper")
model = AutoModelForCausalLM.from_pretrained("ValiantLabs/CodeLlama-70B-Esper")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ValiantLabs/CodeLlama-70B-Esper with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ValiantLabs/CodeLlama-70B-Esper"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ValiantLabs/CodeLlama-70B-Esper",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/ValiantLabs/CodeLlama-70B-Esper
```
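Because the vLLM server exposes an OpenAI-compatible API, it can also be called from Python. Below is a minimal stdlib-only sketch mirroring the curl example above; the `build_chat_request` helper name is ours, not part of vLLM:

```python
import json
import urllib.request

def build_chat_request(model, messages):
    """Build an OpenAI-compatible chat completion request for a local vLLM server."""
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        "http://localhost:8000/v1/chat/completions",  # default `vllm serve` port
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request(
    "ValiantLabs/CodeLlama-70B-Esper",
    [{"role": "user", "content": "What is the capital of France?"}],
)
# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The same client code works against the SGLang server below by changing the port to 30000.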
- SGLang
How to use ValiantLabs/CodeLlama-70B-Esper with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "ValiantLabs/CodeLlama-70B-Esper" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ValiantLabs/CodeLlama-70B-Esper",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "ValiantLabs/CodeLlama-70B-Esper" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ValiantLabs/CodeLlama-70B-Esper",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use ValiantLabs/CodeLlama-70B-Esper with Docker Model Runner:
```shell
docker model run hf.co/ValiantLabs/CodeLlama-70B-Esper
```
Esper-70b is the DevOps code specialist!
- Overall code capabilities with a DevOps focus: specialized in scripting language code, Terraform files, Dockerfiles, YAML, and more!
- Also trained on further code-instruct and chat-instruct data for generally improved chat quality.
- Built on llama-2-70b architecture, using CodeLlama-70b-Instruct-hf as the base model.
(If you're looking for a friendly general-purpose chat model, try ours, available for Llama 13b and 70b.)
Version
This is Version 1.0 of Esper-70b.
The current version of Esper-70b uses CodeLlama-70b-Instruct-hf trained on two sets of data:
- code from bigcode/the-stack-dedup, with our sub-selection focused on scripting languages, Terraform/build scripts, and YAML files.
- our private data for general code-instruct performance, chat-quality response, and user satisfaction. (A portion of this data was also used in Shining Valiant 1.4, our previous general-purpose Llama 70b finetune.)
Esper-70b is the newest release in our Build Tools campaign, delivering helpful open-source capabilities for users and creators. We're working on more tools to come, for everyone to use :)
We plan to continually upgrade this model with more data, improving existing capabilities and adding new ones relevant to a DevOps user base.
Prompting Guide
Esper-70b uses the following recommended chat format, based on CodeLlama-70b chat format:
```
Source: system

 You are Esper, an expert technical assistant AI. Provide high quality code to the user. Source: user

 Hi! Can you explain this Terraform code, thank you:
```
(Generally, anything that works with CodeLlama-70b-Instruct-hf will work with Esper-70b.)
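For illustration, the recommended format above can be assembled by hand. A minimal helper (the function name is ours; `tokenizer.apply_chat_template` in the Transformers example is the authoritative way to build the canonical prompt):

```python
def build_esper_prompt(system, user):
    """Assemble a prompt in the Source:-tagged layout shown above.

    Illustrative only; use tokenizer.apply_chat_template in practice.
    """
    return f"Source: system\n\n {system} Source: user\n\n {user}"

prompt = build_esper_prompt(
    "You are Esper, an expert technical assistant AI. "
    "Provide high quality code to the user.",
    "Hi! Can you explain this Terraform code, thank you:",
)
print(prompt)
```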
Esper-70b is created by Valiant Labs.
Try our flagship chat model, Shining Valiant!
Check out our function-calling model Fireplace for Llama-13b!
We care about open source. For everyone to use.
We encourage others to finetune further from our models.