Hello everyone,
I would like to share my recent paper and proof of concept (PoC) for a method I call "Delta Compression."
Abstract
We propose "Delta Compression," a novel approach to information compression designed to overcome the context-length limitations of Large Language Models (LLMs). This paradigm represents information not as static data but as a dynamic "parameter delta" that functionally alters a shared, deterministic base model. A hypernetwork encoder transforms the input information into a PEFT-style delta, enabling lossless retrieval of complex information from a compact, prompt-addressable representation.
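To make "prompt-addressable" concrete, here is a minimal sketch of what retrieval from such a delta could look like at decode time, assuming a standard PEFT/LoRA adapter applied to a Hugging Face causal LM. The base-model name, adapter path, and prompt key below are illustrative placeholders, not the paper's actual configuration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "base-model-name"   # shared deterministic base model (placeholder)
DELTA_PATH = "./delta_adapter"   # the LoRA delta, i.e. the "compressed file" (placeholder)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)

# Applying the delta shifts the base weights so that each stored text
# becomes the deterministic continuation of its retrieval prompt.
model = PeftModel.from_pretrained(base, DELTA_PATH)
model.eval()

prompt = "<doc_017>"             # hypothetical retrieval key for one stored text
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    # Greedy decoding keeps the output deterministic for a fixed base + delta.
    output = model.generate(**inputs, max_new_tokens=1024, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The key design point this sketch illustrates is that decoding must be deterministic (greedy, no sampling) for the base model plus delta to function as a lossless decompressor.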
PoC Summary
Our proof of concept produced the following results:
- 70 distinct Japanese texts (approx. 70,000 tokens in total; 416 KB as uncompressed files) were encoded into a single minimal LoRA delta.
- The resulting delta file is approx. 396 KB (rank 1, float16); a back-of-the-envelope size check is sketched after this list.
- Each original text was losslessly restored from this delta in response to its corresponding prompt.
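To see why a rank-1, float16 delta lands in the hundreds of kilobytes, here is a rough size check. The hidden size, layer count, and choice of adapted projections below are illustrative assumptions, not the PoC's actual configuration:

```python
def lora_bytes(shapes, rank=1, bytes_per_param=2):
    """Bytes for LoRA factors A (rank x d_in) and B (d_out x rank) per matrix."""
    params = sum(rank * d_in + d_out * rank for d_out, d_in in shapes)
    return params * bytes_per_param

# Illustration: rank-1 adapters on two d x d projections per layer, 24 layers.
d, n_layers = 2048, 24
shapes = [(d, d)] * (2 * n_layers)
print(f"{lora_bytes(shapes) / 1024:.0f} KiB")  # -> 384 KiB, the same order as 396 KB
```

Note that a LoRA delta's size depends only on the adapted layer shapes and the rank, not on how many texts are encoded into it, which is what makes the approach interesting as a compression scheme.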
Links
Paper (Zenodo): https://zenodo.org/records/15876304
Code (GitHub): https://github.com/BK7195/delta-compression-poc (The repository includes a Colab notebook for reproducibility.)
Feedback & Discussion
Any thoughts or feedback on this research would be greatly appreciated.
Regarding arXiv Endorsement
I am an independent researcher seeking an endorsement to submit this paper to arXiv. For those who are eligible and willing to endorse, here is the formal request generated by arXiv:
Kazunori Kondo requests your endorsement to submit an article to the cs.CL section of arXiv. To tell us that you would (or would not) like to endorse this person, please visit the following URL:
If that URL does not work for you, please visit
and enter the following six-digit alphanumeric string:
Endorsement Code: ZNILHY
Thank you for your time and consideration.