From 300KB to 69KB per Token: How LLM Architectures Solve the KV Cache Problem
(news.future-shock.ai)
35 points by future-shock-ai 3 days ago | 3 comments