AI is no longer the exclusive preserve of Big Tech! Tether launches QVAC: has the era of everyone having their own LLM arrived?

Stablecoin issuer Tether announced on the 17th a major technical breakthrough for its AI infrastructure QVAC Fabric: the world’s first cross-platform BitNet LoRA fine-tuning framework. Training and inference for large language models, which previously required enterprise-grade GPUs and cloud computing power, can now be completed on consumer-grade hardware, including smartphones.

Smartphones can also train LLMs: 1B models fine-tuned in under two hours

According to data released by Tether, the framework has successfully achieved BitNet model fine-tuning on various devices, including common models like Samsung S25 and iPhone 16.

Samsung S25 (Adreno GPU):

  • 125M-parameter model: fine-tuned in about 10 minutes

  • 1B-parameter model: about 1 hour 18 minutes

iPhone 16 (Apple GPU):

  • 1B-parameter model: about 1 hour 45 minutes

In stress tests, the framework has fine-tuned models with up to 13 billion parameters.

AI training tasks that previously ran only on high-end NVIDIA GPUs have now been compressed onto edge devices such as smartphones.

Key technology BitNet + LoRA: slashing AI costs dramatically

The core of this breakthrough lies in the combination of two technologies:

BitNet (1-bit LLM)

Compresses traditional high-precision weights down to just three values, -1, 0, and 1, greatly reducing memory and compute requirements.
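As a rough illustration of the idea, here is a minimal sketch of absmean ternary quantization in the style described for BitNet b1.58-class models: weights are scaled by their mean absolute value, then rounded and clipped to {-1, 0, 1}. Function names and the toy matrix are illustrative, not from QVAC.

```python
import numpy as np

def absmean_quantize(w: np.ndarray, eps: float = 1e-6):
    """Quantize a weight matrix to ternary values {-1, 0, 1}.

    Scale by the mean absolute weight, then round and clip
    (the absmean scheme described for BitNet b1.58).
    """
    scale = np.abs(w).mean() + eps
    w_q = np.clip(np.round(w / scale), -1, 1)
    return w_q.astype(np.int8), scale

def dequantize(w_q: np.ndarray, scale: float) -> np.ndarray:
    # Reconstruct a low-precision approximation of the original weights.
    return w_q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
w_q, scale = absmean_quantize(w)
```

Because each weight needs only about 1.58 bits instead of 16 or 32, both storage and matrix-multiply cost drop sharply, which is what makes on-device training plausible at all.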

LoRA (Low-Rank Adaptation)

Trains only a small subset of parameters (cutting the number of trainable parameters by up to 99%), significantly lowering fine-tuning costs.
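The LoRA idea can be sketched in a few lines: the base weight matrix stays frozen, and only two small low-rank factors are trained. The dimensions below are toy values for illustration; at LLM scale the trainable fraction falls well under 1%.

```python
import numpy as np

d_model, rank, alpha = 512, 8, 16

rng = np.random.default_rng(42)
W = rng.normal(size=(d_model, d_model)).astype(np.float32)  # frozen base weight

# LoRA trains only the two low-rank factors A and B; W is never updated.
A = rng.normal(scale=0.01, size=(rank, d_model)).astype(np.float32)
B = np.zeros((d_model, rank), dtype=np.float32)  # B starts at zero: no initial change

def lora_forward(x: np.ndarray) -> np.ndarray:
    # Effective weight is W + (alpha / rank) * B @ A, applied lazily.
    return x @ W.T + (alpha / rank) * (x @ A.T) @ B.T

trainable = A.size + B.size  # 2 * d_model * rank
full = W.size                # d_model ** 2
print(f"trainable fraction: {trainable / full:.2%}")  # ~3% here; far less at LLM scale
```

The adapter factors are tiny compared to the full matrix, so gradients and optimizer state fit in a fraction of the memory that full fine-tuning would need.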

Together, they enable models to run in extremely low-resource environments.

Practical tests show BitNet-1B uses 77.8% less VRAM than Gemma-3-1B and 65.6% less than Qwen3-0.6B. On the same hardware, models roughly twice as large can be run.
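A back-of-envelope estimate shows where savings of this magnitude come from. The arithmetic below covers weight storage only; real measurements like the 77.8% figure also include activations, KV cache, and runtime overhead, so the numbers will not match exactly.

```python
# Illustrative weight-only VRAM estimate for a 1B-parameter model.
params = 1_000_000_000

fp16_bytes = params * 2            # 16-bit weights: 2 bytes each
ternary_bytes = params * 1.58 / 8  # ~1.58 bits/weight for values in {-1, 0, 1}

saving = 1 - ternary_bytes / fp16_bytes
print(f"fp16: {fp16_bytes / 1e9:.2f} GB, ternary: {ternary_bytes / 1e9:.2f} GB")
print(f"weight-memory reduction: {saving:.1%}")
```

On weights alone the reduction is about 90%, which is why a ternary model of roughly twice the parameter count can fit where a 16-bit model barely does.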

GPU acceleration unlocks AI on smartphones: performance boosted up to 11 times

Another key breakthrough of QVAC is enabling BitNet to run on ecosystems beyond NVIDIA, supporting GPUs from AMD, Intel, and Apple Silicon, as well as mobile GPUs such as Adreno, Mali, and Apple Bionic.

Large language models are no longer the exclusive preserve of tech giants; AI can now be decentralized

Tether CEO Paolo Ardoino stated: “Intelligence will be a key determinant of future social development. It has the potential to enhance social stability, serve as a bridge connecting society, or further empower a few elites. The future of artificial intelligence should be accessible, usable, and available to everyone, not monopolized by a few cloud service providers with enormous resources.”

Traditional AI development relies heavily on the cloud and large GPU clusters, which are costly and concentrated among a few tech giants. Tether’s QVAC platform supports meaningful large-scale model training on consumer-grade hardware, including smartphones, demonstrating that advanced AI can be decentralized and inclusive. In the coming months, Tether says it will continue investing resources and funding to ensure AI can run anytime, anywhere on local devices.

This article, “AI is no longer the exclusive preserve of tech giants! Tether launches QVAC: is the era of everyone having an LLM here?”, was originally published on Chain News ABMedia.
