Landmark Attention: Random-Access Infinite Context Length for Transformers
Paper: arXiv:2305.16300
This model has been trained using the PEFT LoRA technique with the Landmark Attention method for 200 steps. The model will likely be trained further and updated later on.
Loading this model requires trust_remote_code to be set to True. In oobabooga, you can simply add the --trust_remote_code flag.
You will also need to disable the "Add the bos_token to the beginning of prompts" option in the settings.
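For use outside oobabooga, here is a minimal sketch of loading the model with the Hugging Face transformers API. The repo id is a placeholder, and the add_special_tokens=False call mirrors disabling the BOS option above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this/repo"  # placeholder: replace with the actual repo id

# trust_remote_code=True is required because Landmark Attention ships
# custom modeling code alongside the checkpoint.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Equivalent to disabling "Add the bos_token to the beginning of prompts":
# tokenize the prompt without prepending the BOS token.
inputs = tokenizer("Once upon a time", return_tensors="pt", add_special_tokens=False)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```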
You can probably merge the adapter with any other LLaMA-based model (provided it's 33B, of course). This repo contains the merged weights, but you can grab the adapter here.
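If you want to do the merge yourself, a sketch using the PEFT API might look like the following; both repo ids are placeholders, and whether a given base model works with the Landmark Attention code is not guaranteed:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Placeholder: any LLaMA-based 33B model you want to merge into.
base = AutoModelForCausalLM.from_pretrained(
    "some-org/llama-33b-variant",
    trust_remote_code=True,  # needed if the base model ships custom code
)

# Attach the LoRA adapter, then fold its weights into the base model.
model = PeftModel.from_pretrained(base, "path/to/landmark-lora-adapter")
merged = model.merge_and_unload()
merged.save_pretrained("llama-33b-landmark-merged")
```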
You can find the training code here.