bartowski releases IQ4_NL quantized GGUF file based on Gemma-4 26B-A4B-it MoE


ME News, April 4 (UTC+8). User @outsource_ recently posted that an IQ4_NL quantized GGUF file of the Google Gemma-4 26B-A4B-it MoE model has been released. The model has roughly 26 billion total parameters, of which about 4 billion are activated per token. The quantized file was produced by bartowski, calibrated and quantized using llama.cpp's imatrix (importance matrix). The file is named gemma-4-26B-A4B-it-IQ4_NL.gguf and is 14.70 GB. (Source: InFoQ)
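As a rough sanity check on the reported numbers: IQ4_NL is a roughly 4.5-bit-per-weight quantization format in llama.cpp, and dividing the file size by the total parameter count should land near that figure. A minimal sketch, assuming the reported 14.70 GB means decimal gigabytes (10^9 bytes) and ignoring GGUF metadata overhead:

```python
# Estimate effective bits per weight of the quantized file.
# Assumptions: 14.70 GB = 14.70e9 bytes (decimal GB), total parameter
# count is 26e9, and GGUF header/metadata overhead is negligible.

def bits_per_weight(size_gb: float, n_params: float) -> float:
    """Effective bits per weight: total bits in the file / parameter count."""
    return size_gb * 1e9 * 8 / n_params

bpw = bits_per_weight(14.70, 26e9)
print(f"{bpw:.2f} bits/weight")  # ~4.52, close to IQ4_NL's nominal ~4.5 bpw
```

The result (about 4.52 bits/weight) is consistent with the file being an IQ4_NL quantization of a ~26B-parameter model.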
