I've now subscribed in my Go tt reimplementation to most of the feeds that I already followed with the old Python tt. Previously, I just had a few feeds for testing purposes in my new config. While transferring, I "dropped" heaps of feeds that appeared to be inactive.
This might motivate me to actually "finish" the new client, so that it could become my daily driver. No need to use the old software stack any longer. Let's see how badly this goes.
#zcabhya
(#zcabhya) If I didn't mess this up, 61 feeds were reduced to 36.
#fbf2xjq
(#zcabhya) @lyse@lyse.isobeef.org I'm glad to hear that! Yay for more clients.
#gh67oua
(#zcabhya) Thanks, @movq@www.uninformativ.de!
My backing SQLite database with indices is 8.7 MiB in size right now.
The twtxt cache is 7.6 MiB; it uses Python's pickle module. And next to it there is a 16.0 MiB second database with all the read statuses for the old tt. Wow, super inefficient, it shouldn't contain anything else, it's a giant, pickled {"$hash": {"read": True/False}, …}. What the heck, why is it so big?! O_o
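To get an idea of where the space actually goes, I could just load the thing and look at the average per-entry overhead. A minimal sketch, assuming a placeholder file name and the dict layout quoted above:

# Rough size check for the read-status pickle. The file name is a
# placeholder, not tt's actual path; the dict layout
# {"$hash": {"read": True/False}, ...} is taken from the post above.
import pickle
import sys

path = sys.argv[1] if len(sys.argv) > 1 else "read-statuses.pickle"

with open(path, "rb") as fh:
    statuses = pickle.load(fh)

total = len(pickle.dumps(statuses))
print(f"{len(statuses)} entries, {total} bytes when pickled, "
      f"{total / len(statuses):.1f} bytes per entry on average")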
#pyscdeq
(#zcabhya) neat! my watcher is currently sitting at about 75 MB following over 1500 feeds. only about 200 are currently somewhat active.
-rw-r--r--. 1 xuu xuu 69M Mar 25 20:46 twt.db
-rw-r--r--. 1 xuu xuu 32K Mar 25 21:34 twt.db-shm
-rw-r--r--. 1 xuu xuu 5.6M Mar 25 21:34 twt.db-wal
sqlite> select state, count(*) n from feeds group by 1;
hot|7
warm|8
cold|183
frozen|743
permanantly-dead|857
#7cxkfza
(#zcabhya) I need to import my yarn cache. It's sitting at about 1.5G in registry format. That should make things interesting…
#cq2ta3a