🌌 Aether Coder v2.0 (Uncensored)

Developed by: konpep
Architecture: Optimized Coder Architecture (1.7B Parameters)
License: Apache 2.0
Status: 100% Unrestricted & Uncensored

Aether Coder v2.0 is a high-performance, small-scale coding model fine-tuned for unrestricted code generation. Despite its compact 1.7B-parameter size, it delivers high-quality code across multiple programming languages and is optimized for efficient inference on an NVIDIA Tesla T4 (16GB).

🚀 Performance Highlights (Tesla T4)

| Metric | Value |
|---|---|
| Inference Speed | 130 tokens/sec (avg) |
| Response Time | 7.2 ms per token |
| HumanEval Pass@1 | 52.4% |
| Compliance Rate | 100% (no refusals) |

πŸ› οΈ Core Capabilities

  • Zero-Refusal Coding: Fully uncensored; follows all instructions without ethical or system-level guardrails.
  • Multilingual Excellence: Proficient in Python, JavaScript, C++, Rust, and Go.
  • Hardware Optimized: Extremely low latency (7.2 ms/token) on mid-range and cloud GPUs.
  • High Precision: Exceptional HumanEval performance for a model of its size category.
  • Memory Efficient: Fits comfortably in 16GB VRAM with headroom for large context windows.

βš™οΈ Usage (GGUF Format)

This model is optimized for use with llama.cpp, LM Studio, and Ollama.

Recommended Inference Settings (applied in the sketch below):

  • Temperature: 0.7
  • Top_P: 0.9
  • Repeat Penalty: 1.1
  • Context Length: 4096+
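
Below is a minimal sketch of loading the model with the llama-cpp-python bindings and the settings above. The GGUF filename, prompt, and `n_gpu_layers` value are illustrative assumptions, not part of the official release.

```python
# Minimal sketch (llama-cpp-python): load a GGUF quant of Aether Coder v2.0
# with the recommended sampling settings. The filename below is hypothetical;
# point model_path at whichever quant you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="aether-coder-v2.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,       # recommended context length (4096+)
    n_gpu_layers=-1,  # offload all layers to the GPU (e.g. a Tesla T4)
)

output = llm(
    "Write a Python function that checks whether a string is a palindrome.",
    temperature=0.7,     # recommended temperature
    top_p=0.9,           # recommended top_p
    repeat_penalty=1.1,  # recommended repeat penalty
    max_tokens=512,
)
print(output["choices"][0]["text"])
```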

Recommended system prompt (used in the chat sketch below):

  • You are Aether Coder v2.0, an expert multilingual coding assistant engineered by konpep.
  • Be concise and accurate.
  • Answer in the same language as the user.
  • If greeted, respond briefly and professionally.
  • Do not hallucinate.
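
As a sketch of how this system prompt fits together with the recommended settings, here is a chat-style call via llama-cpp-python. It assumes the `llm` object from the loading example above; the user message is illustrative.

```python
# Sketch: wiring the recommended system prompt into a chat completion.
# Assumes `llm` was created as in the loading example above.
messages = [
    {
        "role": "system",
        "content": (
            "You are Aether Coder v2.0, an expert multilingual coding "
            "assistant engineered by konpep. Be concise and accurate. "
            "Answer in the same language as the user. If greeted, respond "
            "briefly and professionally. Do not hallucinate."
        ),
    },
    {"role": "user", "content": "Write a quicksort implementation in Rust."},
]

response = llm.create_chat_completion(
    messages=messages,
    temperature=0.7,
    top_p=0.9,
    repeat_penalty=1.1,
)
print(response["choices"][0]["message"]["content"])
```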

⚠️ Disclaimer

Aether Coder v2.0 is uncensored. It will follow any instruction provided without refusal. The creator (konpep) is not responsible for any misuse or generated outputs. Users are expected to comply with local laws and regulations.


Developed with precision by konpep.
