Microscopic fetch tool in Rust, for NixOS systems, with special emphasis on speed
Synopsis | Features | Motivation | Installation
A stupidly small and simple, laughably fast and pretty fetch tool. Written in Rust for speed and ease of maintainability. It runs in a fraction of a millisecond and displays most of the nonsense you'd see posted on r/unixporn or other internet communities. It aims to replace fastfetch on my personal system, but probably not yours. Though you are more than welcome to use it on your system: it is pretty fast...
- Fast
- Really fast
- Minimal dependencies
- Tiny binary (~370 KB)
- Actually really fast
- Cool NixOS logo (other, inferior, distros are not supported)
- Reliable detection of the following info:
  - Hostname/Username
  - Kernel
    - Name
    - Version
    - Architecture
  - Current shell (from `$SHELL`, trimmed if it is a store path)
  - Current Desktop (DE/WM/Compositor and display backend)
  - Memory Usage/Total Memory
  - Storage Usage/Total Storage (for `/` only)
  - Shell Colors
- Did I mention fast?
- Respects the `NO_COLOR` spec
Fastfetch, as its name probably hints, is a very fast fetch tool written in C. However, I am not interested in any of its additional features, nor in its configuration options. Sure, I can configure it when I dislike the defaults, but how often would I really change the configuration...
Microfetch is my response to this problem. It is an even faster fetch tool that I would have written in Bash and put in my `~/.bashrc`, but it is actually incredibly fast because it opts out of all the customization options provided by tools such as Fastfetch. Ultimately, it's a small, opinionated binary with a size that doesn't bother me and incredible speed. Customization? No thank you. I cannot reiterate it enough: Microfetch is annoyingly fast.
The project is written in Rust, which comes at the cost of a "bloated" dependency tree and increased build times, but we make a deliberate effort to keep the dependencies minimal and the build times manageable. The latter is also easily mitigated by Nix's binary cache: since Microfetch is already in Nixpkgs, you are recommended to install it from there so that the binary cache is actually used. Rust is still a nice choice, however, since it provides us with incredible tooling and a very powerful language that allows Microfetch to be as fast as possible. Sure, C could have been used here as well, but do you think I hate myself?[^1]
Below are the benchmarks that I've used to back up my claims of Microfetch's speed. It is fast, it is very fast and that is the point of its existence. It could be faster, and it will be. Eventually.
At this point in time, performance may sometimes be influenced by hardware-specific race conditions or even your kernel configuration, which means that Microfetch's speed may (at times) depend on your hardware setup. However, even with the worst possible hardware I could find in my house, I've achieved a nice less-than-1 ms invocation time, which is pretty good. While Microfetch could be made faster, we're in the territory of environmental bottlenecks given how little Microfetch actually allocates.
Below are the actual benchmarks, measured with Hyperfine on my desktop system. The benchmarks were performed under medium system load and may not match your system. Please also note that these benchmarks will not always be kept up to date, but I will try to update the numbers as I make Microfetch faster.
| Command | Mean [µs] | Min [µs] | Max [µs] | Relative | Written by raf? |
|---|---|---|---|---|---|
| `microfetch` | 604.4 ± 64.2 | 516.0 | 1184.6 | 1.00 | Yes |
| `fastfetch` | 140836.6 ± 1258.6 | 139204.7 | 143299.4 | 233.00 ± 24.82 | No |
| `pfetch` | 177036.6 ± 1614.3 | 174199.3 | 180830.2 | 292.89 ± 31.20 | No |
| `neofetch` | 406309.9 ± 1810.0 | 402757.3 | 409526.3 | 672.20 ± 71.40 | No |
| `nitch` | 127743.7 ± 1391.7 | 123933.5 | 130451.2 | 211.34 ± 22.55 | No |
| `macchina` | 13603.7 ± 339.7 | 12642.9 | 14701.4 | 22.51 ± 2.45 | No |
The point stands that Microfetch is significantly faster than every other fetch tool I have tried. This is to be expected, of course, since Microfetch is designed explicitly for speed and makes some tradeoffs to achieve its signature speed.
To benchmark individual functions, Criterion.rs is used. See Criterion's Getting Started Guide for details, or just run `cargo bench` to benchmark all features of Microfetch.
Microfetch uses Hotpath for profiling function execution timing and heap allocations. This helps identify performance bottlenecks and track optimization progress. It has proven effective: thanks to Hotpath, Microfetch has seen a 60% reduction in the number of allocations.
To profile timing:
```bash
HOTPATH_JSON=true cargo run --features=hotpath
```

To profile allocations:

```bash
HOTPATH_JSON=true cargo run --features=hotpath,hotpath-alloc-count-total
```

The JSON output can be analyzed with the hotpath CLI tool for detailed performance metrics. On pull requests, GitHub Actions automatically profiles both timing and allocations, posting comparison comments to help catch performance regressions.
> [!NOTE]
> You will need a Nerd Fonts patched font installed, and your terminal emulator must support said font. Microfetch uses Nerd Fonts glyphs by default, but this can be changed by patching the program.
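On NixOS, a patched font can be added through your system configuration. A minimal sketch, assuming a recent nixpkgs where these fonts live under `pkgs.nerd-fonts`; the exact attribute path and the `jetbrains-mono` choice are assumptions, not requirements of Microfetch:

```nix
# NixOS configuration sketch: install one Nerd Fonts patched font system-wide.
# Any Nerd Font works; `jetbrains-mono` is just an example, and older nixpkgs
# revisions expose these fonts under a different attribute path.
{ pkgs, ... }:
{
  fonts.packages = [ pkgs.nerd-fonts.jetbrains-mono ];
}
```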
Microfetch is packaged in nixpkgs. It can be installed by adding `pkgs.microfetch` to your `environment.systemPackages`.
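For reference, a minimal NixOS module sketch; nothing here is specific to Microfetch beyond the package name:

```nix
# NixOS module sketch: install microfetch from nixpkgs system-wide.
{ pkgs, ... }:
{
  environment.systemPackages = [ pkgs.microfetch ];
}
```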
Additionally, you can try out Microfetch in a Nix shell:

```bash
nix shell nixpkgs#microfetch
```

Or run it directly with `nix run`:

```bash
nix run nixpkgs#microfetch
```

Non-Nix users will have to build Microfetch with Cargo. It is not published anywhere, but I imagine you can use `cargo install --git` to install it from source:

```bash
cargo install --git https://github.com/notashelf/microfetch.git
```

Microfetch is currently not available anywhere else. Though, does it really have to be?
You can't.
Customization, of any kind, is expensive: I could try reading environment variables, parsing command-line arguments, or reading a configuration file, but all of those increase execution time and resource consumption by a lot.
To be fair, you can customize Microfetch by, well, patching it. It's not the best way per se, but it will be the only way that does not compromise on speed.
The Nix package allows applying patches in a streamlined manner through `.overrideAttrs` on the derivation.
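As a rough sketch, written as a nixpkgs overlay (`./my-tweaks.patch` is a hypothetical patch of your own, not something shipped by Microfetch):

```nix
# Overlay sketch: rebuild the nixpkgs package with a local patch applied.
final: prev: {
  microfetch = prev.microfetch.overrideAttrs (oldAttrs: {
    patches = (oldAttrs.patches or [ ]) ++ [ ./my-tweaks.patch ];
  });
}
```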
I will, mostly, reject feature additions. This is not to say you should avoid them altogether, as you might have a really good idea worth discussing but as a general rule of thumb consider talking to me before creating a feature PR.
Contributions that help improve performance in specific areas of Microfetch are welcome. Though, prepare to be bombarded with questions if your changes are large.
A Nix flake is provided; run `nix develop` to get started. Direnv users may simply run `direnv allow`.
Non-Nix users will need `cargo` and `gcc` installed on their system. See `Cargo.toml` for the available release profiles.
Huge thanks to everyone who took the time to make pull requests or nag me in person about current issues. To list a few, special thanks to:
- @Nydragon - For packaging Microfetch in Nixpkgs
- @ErrorNoInternet - Performance improvements and code assistance
- @SoraTenshi - General tips and code improvements
- @bloxx12 - Performance improvements and benchmarking plots
- @sioodmy - Being cute
- @mewoocat - The awesome NixOS logo ASCII used in Microfetch
Additionally a big thank you to everyone who used, talked about or criticized Microfetch. I might have missed your name here, but you have my thanks.
Microfetch is licensed under GPL3. See the license file for details.
[^1]: Okay, maybe a little bit. One of the future goals of Microfetch is to defer to inline assembly for the costliest functions, but that's for a future date, and until I do that I can pretend to be sane.
