vain @www.uninformativ.de


Recent twts from vain

(#fvvdrjq) @lyse@lyse.isobeef.org @dce@hashnix.club It’s pretty cool, I won’t argue that, but also really simple, to be completely honest. 😅 The BIOS already provides all you need to send data to the printer:

https://helppc.netcore2k.net/interrupt/bios-printer-services

The BIOS actually does provide a great deal of functionality, which, to me, was one of the most surprising lessons of this project (the project of writing a little 16-bit real-mode OS, that is). It often didn’t feel like writing an operating system – it felt more like writing a normal program that just uses BIOS calls the way we use syscalls these days.

(I’ve also read a lot of warnings, like “don’t use the BIOS for this or that”. Mostly because it tends to be very slow.)
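
For illustration, here is roughly what driving the printer through those services looks like. This is a minimal sketch of my own (NASM syntax, 16-bit real mode), not code from the project; it assumes the printer sits on LPT1 and that DS:SI points at a zero-terminated string:

    bits 16

    ; Send the zero-terminated string at DS:SI to LPT1 via INT 17h.
    print_string:
        xor dx, dx          ; DX = 0 selects LPT1
    .next:
        lodsb               ; AL = next byte, SI advances
        test al, al
        jz .done            ; stop at the terminating zero byte
        mov ah, 0x00        ; AH = 0: "print character in AL"
        int 0x17            ; BIOS printer services
        test ah, 0x08       ; returned status, bit 3 = I/O error
        jnz .done           ; give up on error
        jmp .next
    .done:
        ret

    msg: db "Hello, printer!", 0x0D, 0x0A, 0

A caller would then do “mov si, msg” followed by “call print_string”.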


#ltqzz7a

(#hvbpp3q) Right, now that I’m reading some of the comments: I had initially assumed that they would actually make it impossible for distros to provide a 32-bit build (intentionally or unintentionally). But maybe that’s not the case, and distros can just continue to ship a 32-bit Firefox …


#xbm6lua

Now that’s interesting. Some of these bots start crawling at URLs like this:

https://uninformativ.de/projects/lariza/NetTracer-Scenes/GPUTracer/multipass/xlonitor/http-collect/getpw

That is obviously completely wrong. But I can explain it. Some years ago, I screwed up my nginx rewrite rules, and that’s how these broken URLs came to be.

It all redirects to /git now, which is why that endpoint sees so much traffic lately.
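
To illustrate the kind of mistake that can produce such URLs – a hypothetical sketch, not the actual old rules: a catch-all rewrite that serves the same index page at every path depth will do it, because the page’s relative links then resolve against whatever URL was requested, gluing one more project name onto the path with every crawl step.

    # Hypothetical sketch of the failure mode (and the current state),
    # not the real configuration.
    #
    # A rule like this serves the same page at any depth:
    #
    #     rewrite ^/projects/ /projects/index.html last;
    #
    # Relative links on that page ("lariza/", "GPUTracer/", ...) then
    # resolve against the requested URL, so a crawler walks ever-longer,
    # entirely fictional paths.

    # Today, everything below /projects/ is redirected to /git instead:
    location /projects/ {
        return 301 /git;
    }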

But what does that mean? Why do they start there? I can only speculate that this company bought an old database of web links and uses it as a seed list for crawling. It was probably a cheap one, because those broken rewrite rules were fixed quite a long time ago.


#xukjpvq

(#nbo4v7q) @prologic@twtxt.net I’m doing that now as well, but I don’t think it’s a good solution. It’s going to hurt “self-hosting” in the long run: I can’t afford true self-hosting, where I actually host everything here at home – instead, I have to use a cloud provider / VPS. It’s only a matter of time until my provider starts doing AI shit as well (or rather, its customers do), and then what? I get blocked, e.g. I can no longer send email to (some) people. This is already bad and it’s going to get worse.


#qomj33a

(#sux32qq) “But all your stuff is MIT licensed! They are allowed to do that!”

Haha. As if they would care. They crawl everything they get their hands on.

Besides, it’s not even true: the license states that the copyright notice must be retained, and “AI” breaks that. They incorporate my code and my articles into their product and make it appear as if it were their own work.


#imftnja

(#sux32qq) Why do I care about this?

  1. The load will become a problem at some point.
  2. These crawlers and the current “AI” in general are breaking the rules. I am supposed to pay for every little thing; I get sued for “piracy”. But apparently, these rules only apply to me – if I had more money, I could break them. Fuck that.
  3. I simply don’t want it. Period.

#2s2wjga

(#sux32qq) This probably means that I can no longer host my own website. I don’t want to deploy something like Anubis, because that ruins the whole thing: I want the site to be accessible from ancient browsers, like the ones running on OS/2 or Windows 3.11.

I’ll keep an eye on it for a while. Maybe try to block some IPs.
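
Blocking at the nginx level would look something like the sketch below; the CIDR ranges are documentation placeholders (TEST-NET), not real crawler networks. Given that each wave comes from lots of different IPs, blocking whole ranges is probably the only realistic granularity.

    # Sketch only -- placeholder ranges; goes into the http or
    # server context:
    deny 192.0.2.0/24;
    deny 198.51.100.0/24;
    allow all;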

Sooner or later, I’ll take the website down and shift everything to Gopher.


#qiq5bnq

The bots have begun to access my website way more often: I’m now getting about 120k hits on https://www.uninformativ.de/git/ within a couple of hours.

They don’t cache anything, probably on purpose.

It comes in waves. I get about 100 hits (all at once) on that /git endpoint, all from different IPs. Then it takes a moment until I get another wave of about 500-1000 requests (all at once) where they do HEAD requests on some of the paths below /git. I assume they did a GET earlier and are now checking if something has changed.
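
A HEAD request is cheap for them: it returns only the response headers (Last-Modified, ETag, Content-Length, …), which is typically all a crawler needs to decide whether a page has changed since an earlier GET. With curl, the -I flag sends exactly such a request:

    curl -I https://www.uninformativ.de/git/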


#sux32qq