@Lolle2000la Lolle2000la commented Sep 7, 2025

This speeds up the backup restoration process by parallelizing it and by better batching the changes to individual entries. On my library it cuts the import down in an emulator from around 20 minutes to 3:20, and on my S24 Ultra from around 50-60 minutes to 5-9 minutes. These operations are mostly IO-bound, so further improvements would require some reworking of the database code, but I wanted to remain as close to the original code as possible.
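A minimal sketch of the two ideas above, not the actual PR code (names like `restoreEntry` and `BATCH_SIZE` are hypothetical stand-ins): a dedicated fixed-size thread pool runs the IO-bound work in parallel, and entries are grouped into batches so each unit of work covers many entries instead of paying per-entry overhead.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelRestore {
    // Hypothetical batch size; a real value would be tuned against the database.
    static final int BATCH_SIZE = 100;

    public static int restore(List<String> entries, int threads) throws InterruptedException {
        // Dedicated pool, so the restore does not starve the app's shared workers.
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger restored = new AtomicInteger();
        List<Callable<Void>> tasks = new ArrayList<>();

        // Split the backup into batches; each task handles one whole batch,
        // which is what cuts down the per-entry overhead.
        for (int i = 0; i < entries.size(); i += BATCH_SIZE) {
            List<String> batch = entries.subList(i, Math.min(i + BATCH_SIZE, entries.size()));
            tasks.add(() -> {
                for (String entry : batch) {
                    restoreEntry(entry);          // stand-in for the real per-entry work
                }
                restored.addAndGet(batch.size()); // progress counted once per batch
                return null;
            });
        }
        pool.invokeAll(tasks); // runs batches concurrently and waits for completion
        pool.shutdown();
        return restored.get();
    }

    static void restoreEntry(String entry) {
        // Placeholder for the IO-bound database write.
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> entries = new ArrayList<>();
        for (int i = 0; i < 250; i++) entries.add("entry-" + i);
        System.out.println(restore(entries, 4)); // prints 250
    }
}
```

Because the work is IO-bound, the pool size can exceed the core count without hurting throughput; the ceiling is the database, not the CPU.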

Some other things are affected by this. For example, the notification is no longer updated after every single restored entry but only after a bunch of them (depending on the number of threads in the pool), since that reduces lock contention. And since this code now uses its own thread pool for the process, running a restore will no longer cause the entire app to suffer from thread starvation while operations like search and loading a manga wait for a thread to become available (it doesn't fix it completely though, especially on downstream forks with sync functionality, but I digress).
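The notification throttling can be sketched like this (again hypothetical names, with `System.out.println` standing in for the Android notification call): workers increment a shared counter after every entry, but the contended update path only fires once per N completions, plus once at the end.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ThrottledProgress {
    private final AtomicInteger done = new AtomicInteger();
    private final int updateEvery; // roughly the worker-pool size in this scheme
    int notificationsShown = 0;

    ThrottledProgress(int updateEvery) {
        this.updateEvery = updateEvery;
    }

    // Called by worker threads after each restored entry.
    void onEntryRestored(int total) {
        int count = done.incrementAndGet();
        // Only touch the notification every `updateEvery` entries (or at the
        // very end), so workers rarely contend on the notification lock.
        if (count % updateEvery == 0 || count == total) {
            showNotification(count, total);
        }
    }

    // Synchronized stand-in for the expensive, lock-protected notification update.
    synchronized void showNotification(int count, int total) {
        notificationsShown++;
        System.out.println("Restored " + count + "/" + total);
    }

    public static void main(String[] args) {
        ThrottledProgress p = new ThrottledProgress(8);
        for (int i = 0; i < 20; i++) p.onEntryRestored(20);
        // 20 entries, update every 8: updates fire at 8, 16, and 20.
        System.out.println(p.notificationsShown); // prints 3
    }
}
```

With per-entry updates the workers would serialize on the notification lock 20 times here; the throttled version takes it only 3 times, and the final update guarantees the displayed count ends exact.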

AI disclaimer

I made heavy use of AI, especially in the beginning of development, to understand the restoration process and identify bottlenecks and opportunities for improvement. I had the AI implement these changes until the code compiled, then tested it against the existing implementation. When all issues were fixed, I started walking back the changes, reworking potentially bad decisions by the AI and restoring old behavior that had been arbitrarily changed by the LLM.

After making sense of it, checking it, and making sure that the code is as undisruptive as possible, I tested it again against backups of my library taken at various times, looking for any differences, both from a clean restore (library empty) and a restore onto an existing library (with the same contents or slight differences). I thus confirmed that the results were the same as on main.

I am including this disclaimer in light of the recent waves of "AI slop" that plague open source communities, to assure you that this is both in part authored and thoroughly reviewed and tested by me (though I might have missed things, of course), and not hallucinated contribution farming for my CV or whatever drives people to do that.
