Kevin Fenzi: infra weekly recap: early december 2025

Scrye into the crystal ball

hey everyone, it's saturday so time for another recap of adventures in fedora infrastructure and other fedora areas.

scrapers

I started a discussion thread about the current scrapers we are dealing with. To summarize, anubis has cut out a bunch of them and really helped quite a lot. It has caused some issues for legitimate clients as well, but we have been working through those as we hear about them. The remaining scrapers are large botnets of browsers, probably running on end user machines. Those are more troublesome to deal with.

The discussion thread is at: https://discussion.fedoraproject.org/t/scrapers-and-ideas-for-how-to-deal-with-them/175760 if anyone would like to read or contribute.

We had another run-in with them earlier this morning. Not a great way to spend a saturday morning, but I did look more carefully this time. The main cause of issues was them hitting src.fedoraproject.org and its /history/ and /blame/ endpoints. This was causing the backend to do a somewhat expensive git blame/history call against the local repos, and since those calls took a while to come back, requests piled up and latency went way up. For now I have blocked those endpoints in the src.fedoraproject.org web interface. This brought everything back to normal. If you need to do those things, you can easily clone the git repo locally and do them there.
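In case it's not obvious what that looks like in practice, here's a minimal sketch (using the 'bash' package purely as an arbitrary example): clone the dist-git repo from src.fedoraproject.org and ask git for the same information the web endpoints were generating.

    # Hypothetical example: get the same info as the /history/ and /blame/
    # web views by cloning the dist-git repo and running git directly.
    git clone https://src.fedoraproject.org/rpms/bash.git
    cd bash
    git log --oneline -- bash.spec   # roughly what the /history/ view showed
    git blame bash.spec              # roughly what the /blame/ view showed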

rdu2-cc to rdu3 datacenter move

This last week, I moved pagure.io (virtually) to the new datacenter. Unfortunately it didn't go as smoothly as I had hoped. All the data synced over in about 15 minutes or so, but when I tried to test it before switching it live, it just wasn't allowing me to authenticate on git pushes. Finally the light bulb went on and I realized that pagure was checking for auth, but it wasn't 'pagure.io' yet because I hadn't updated dns. ;( It's always DNS. :) After that everything went fine.

There were a few loose ends I had to fix up the next day: mirroring out was not working because we didn't have outgoing ssh listed as allowed, uploading releases wasn't working due to a selinux labeling issue, and finally our s390x builders couldn't reach it because I forgot they needed to do that. Hopefully pagure.io is all happy now, and I even gave it more resources in the new dc.
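The post doesn't go into how the selinux labeling issue was actually fixed, but for anyone curious, a typical way to check and restore labels on a directory looks something like this (the path here is just a placeholder, not the real one):

    # Hypothetical sketch: compare the actual selinux labels to what policy
    # expects, then relabel the tree to match.
    ls -lZ /srv/releases/          # placeholder path; shows current labels
    matchpathcon /srv/releases/    # what policy says the label should be
    restorecon -Rv /srv/releases/  # relabel recursively, verbose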

Monday the actual physical move happens. See: https://pagure.io/fedora-infrastructure/issue/12955 for more details. Mostly, folks shouldn't notice these machines moving. abrt submissions will be down, and download-cc-rdu01 will be down, but otherwise it should be a big nothing burger for most folks. Machines will move monday and we will work tuesday to reinstall/reconfigure things and bring it all back up.

Matrix outage on dec 10th

There is going to be a short outage of our fedora.im and fedoraproject.org matrix servers. We are migrating to the new MAS setup (Matrix Authentication Server). This will allow clients to use things like element-x and is also an important step we wanted to complete before moving forward on deploying our own matrix servers.

forge migration

A number of groups have already moved over to forge.fedoraproject.org from pagure.io. I was really hoping to move infrastructure, but haven't had the cycles yet. We do have the orgs created now and are planning on moving our docs over very soon. I don't know if we will move tickets before the end of the year or not, but we will see.

December of docs

So, I committed myself to doing a docs pr/issue/something every day in December, and so far I am doing so! 6 days, 6 PRs, and more tickets updated. Hopefully I can keep it up.
