From 1d16e476368fe00ab55fcb7382aa281f25ea7513 Mon Sep 17 00:00:00 2001
From: Qihang Zhang
Date: Sun, 9 Nov 2025 15:41:39 +0800
Subject: [PATCH 1/2] [fix] fix the problem that the blog won't automatically redirect.

---
 _posts/2025-10-11-max-ent-rl.md | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/_posts/2025-10-11-max-ent-rl.md b/_posts/2025-10-11-max-ent-rl.md
index d6a6567..12d6000 100644
--- a/_posts/2025-10-11-max-ent-rl.md
+++ b/_posts/2025-10-11-max-ent-rl.md
@@ -15,3 +15,10 @@ authors:
       name: UBC
 
 ---
+
+
+
+
+
+If you are not redirected automatically, you can read the full post here:
+[Why the Exponential? From Max‑Entropy RL to the Boltzmann Distribution](https://qihang-zhang.com/Learning-Sys-Blog/2025/10/06/max-ent-rl-and-boltzmann-distribution.html).

From 3b924dead23d4d22b37fe0206bfd51408c3ab6ed Mon Sep 17 00:00:00 2001
From: Qihang Zhang
Date: Sun, 9 Nov 2025 16:07:23 +0800
Subject: [PATCH 2/2] [doc] Add the blog post on WPoE

---
 _posts/2025-10-11-max-ent-rl.md   |  2 +-
 _posts/2025-11-09-weighted-poe.md | 25 +++++++++++++++++++++++++
 2 files changed, 26 insertions(+), 1 deletion(-)
 create mode 100644 _posts/2025-11-09-weighted-poe.md

diff --git a/_posts/2025-10-11-max-ent-rl.md b/_posts/2025-10-11-max-ent-rl.md
index 12d6000..6912889 100644
--- a/_posts/2025-10-11-max-ent-rl.md
+++ b/_posts/2025-10-11-max-ent-rl.md
@@ -1,6 +1,6 @@
 ---
 layout: distill
-title: Why the Exponential? From Max‑Entropy RL to the Boltzmann Distribution
+title: Why the Exponential? From Max‑Entropy RL to the Boltzmann Distribution
 description: This blog post explores why the exponential function appears ubiquitously across modern RL, energy-based modeling, and statistical mechanics. We examine the connection between max-entropy reinforcement learning and the Boltzmann distribution, uncovering the fundamental principles that make the exponential form inevitable and explaining what "temperature" actually does in these frameworks.
 tags: reinforcement-learning information-theory boltzmann-distribution
 giscus_comments: true
diff --git a/_posts/2025-11-09-weighted-poe.md b/_posts/2025-11-09-weighted-poe.md
new file mode 100644
index 0000000..d851e60
--- /dev/null
+++ b/_posts/2025-11-09-weighted-poe.md
@@ -0,0 +1,25 @@
+---
+layout: distill
+title: Test-Time Steering for Lossless Text Compression via Weighted Product of Experts
+description: >
+  When I was a child, I always wondered: if I keep compressing the same file, will it eventually shrink to nothing? Of course, the answer is no: once a file is optimally compressed by a lossless compressor, compressing it again with the same method gives a file of exactly the same size. Today I know this comes from the fundamental limits of lossless compression in information theory. But what if we use multiple compressors instead of one? If we combine them, can each remove a different part of the data’s redundancy, and how should such a combination be designed? In this blog we discuss these questions and propose a method called Weighted Product of Experts.
+tags: large-language-models lossless-compression mixture-of-experts information-theory
+giscus_comments: true
+date: 2025-11-09
+featured: true
+redirect: https://qihang-zhang.com/Learning-Sys-Blog/2025/10/15/weighted-product-of-experts.html
+
+authors:
+  - name: Qihang Zhang
+    url: "https://qihang-zhang.com/"
+    affiliations:
+      name: UBC
+
+---
+
+
+
+
+
+If you are not redirected automatically, you can read the full post here:
+[Test-Time Steering for Lossless Text Compression via Weighted Product of Experts](https://qihang-zhang.com/Learning-Sys-Blog/2025/10/15/weighted-product-of-experts.html).