Switching to KDE

I’m in the midst of switching from Gnome to KDE/Plasma. I’m doing this because Kdenlive crashes a lot less under KDE, and the every-three-minutes crashes were making video editing amazingly painful.

I’m actually really liking it. The biggest problem right now (less than 24 hours in) is muscle memory making unexpected things happen.

One of the things I liked most about MacOS was that I had different applications on different virtual desktops, and I had my fingers trained so that if I wanted, say, to go to email, that was on desktop 2, and alt-2 took me there. This was never possible (or, at least, easy) on Gnome. But it’s easy on KDE, and I’m rapidly getting back into that habit, even though it’s been roughly 5 years since I’ve used a Mac.

There are, of course, small irritations, having more to do with what I’m used to than with whether they are “good” or “bad”. But I think, overall, in addition to reducing how long it takes to edit video, this will be a net win for productivity.

We’ll see.

Student supercomputing at ISC18

This is the second year that I’ve attended ISC-HPC in Frankfurt. One of the highlights of this event is the Student Cluster Competition. This year there are 12 teams competing to build the fastest and most energy-efficient student cluster.

This year I interviewed about half of the teams. It’s hard to find a time when they’re not actively working on projects, and one doesn’t want to interrupt.

There are several parts to the contest. In addition to solving some simulation problems that they’re given months in advance, there is always a secret application that is revealed only when they arrive on site. They have to program and optimise a solution, and then try to run it faster than any of the other teams. And they have to do all that without exceeding 3 kilowatts of power usage.

This year there are teams from Africa, Asia, South America, and Europe.

The awards ceremony is this afternoon, when we’ll find out which of these teams of young student geniuses will prevail.

Feathercast at FOSS Backstage

This week I am at FOSS Backstage, in Berlin. It’s a conference about what goes on behind the scenes in open source – issues of governance, licences and other legal stuff, community management, mentoring, and so on. Once a project gets beyond a few people, these issues start coming up.

As was observed in this morning’s keynote, this doesn’t feel like a first-time conference, but rather like something that’s been running smoothly for a while. I’m very impressed with the event.

I have been doing interviews for Feathercast this week, and have captured 13 interviews so far, so this is going to take some time to edit and publish. They’ll be on the Apache Software Foundation YouTube channel, as well as on the Feathercast site.

I’ve been talking to speakers from the event, and, since all of the sessions are being videoed, I’m trying not to simply reproduce their talks, but to draw out some information about the project, organization, or concept they were presenting. I hope you like what I’ve done.

Follow @FeatherCast on Twitter to find out when the episodes are published.

Upcoming events (June and beyond)

I’m about to head out for a few events again, and I’m in the process of planning several other events.

First, I’ll be in Berlin for FOSS Backstage, Berlin Buzzwords, and the Apache EU RoadShow. This is a trifecta of open source events happening at the Kulturbrauerei in Berlin. I’ll be speaking at Backstage about mentoring in open source, which, you might know, is something I’m passionate about. I’ll also be doing interviews for Feathercast, so if you’re going to be there, find me and do an interview.

I’ll be home for a week, and then I’ll be attending the ISC-HPC Supercomputing event in Frankfurt. This is the second time I’ll attend this event, which was my introduction to supercomputing last year. I’ve learned so much since then, but I’m still an HPC newbie. While there, I hope to spend most of my time speaking with the EDUs and research orgs that are present, and doing interviews with the student supercomputing teams that are participating in the Student Cluster Competition.

Beyond that, I’m planning several events, where I’ll be representing CentOS.

In August, I’ll be attending DevConf.us in Boston, and on the day before DevConf, we’ll be running a CentOS Dojo at Boston University. The call for papers for that event is now open, so if you’re doing anything interesting around CentOS, please submit a paper and come hang out with us.

Later in August, I will (maybe? probably?) be going to Vancouver for Open Source Summit North America (formerly LinuxCon) to represent CentOS.

In September, I’ll be at ApacheCon North America in Montreal. The schedule for this event is published, and registration is open. You should really come. ApacheCon is something I’ve been involved with for 20 years now, and I’d love to share it with you.

October is going to be very full.

CentOS is proudly sponsoring Ohio LinuxFest, which apparently I last attended in 2011! (That can’t be right, but that’s the last one I have photographic evidence for.) We (CentOS) will be sharing our booth/table space with Fedora, and possibly with some of the projects that use the CentOS CI infrastructure for their development process. More details as we get closer to the event. That’s October 12th – 13th in Columbus.

Then, on October 19th, we’ll be at CERN, in Meyrin, Switzerland, for the second annual CERN CentOS Dojo. Details and the call for papers are on the event website at http://cern.ch/centos.

Immediately after that, I’ll be going (maybe? probably?) to Edinburgh for Open Source Summit Europe. This event was in Edinburgh a few years ago, and it was a great location.

Finally, in November, I plan to attend Supercomputing 18 in Dallas, which is the North American version of the HPC event in Frankfurt, although it tends to be MUCH bigger. Last year, at the event in Denver, I walked just over 4 miles one day on the show floor, visiting the various organizations presenting there.

So, that’s it for me, for the rest of the year, as far as I know. I would love to see you if you’ll be at, or near, any of these venues.

Web server performance problem solved, years later

(Geeky post alert. If you’re reading this on Facebook, the links and formatting are going to be all messed up.)

15 years ago, I wrote a blog post about a stereo cabinet glass door that spontaneously exploded. For some reason, this post attracted a lot of attention. If I had written it a few years later, one would say it “went viral.” It received tens of thousands of page views, and 330 comments.

At some point, I decided to export it to a static page, since every page load was causing my server – at the time, running on a Pentium in my home office across my DSL line – to slow down horribly. In the process, I managed to delete the page entirely (a long story within a long story) and I grabbed the page off of the Wayback Machine.

That page is HERE, by the way.

Each comment has a Gravatar logo next to it, which, due to the way curl (the tool I used to retrieve the static copy) works, has a name like avatar(230).php but is actually a jpeg file. That means that every time the page loads, it makes 330 calls to the php engine, which errors out because the file in question isn’t a php file, but is an actual on-disk jpeg file. Like this one, for example.

Then, several years ago, I switched from using mod_php to using PHP-FPM, which does the same thing, but more efficiently.

Finally, at some point, I added a mod_security ruleset that attempted to detect when people were DDoSing my site – the barrier it set was more than 30 requests in under a second.
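For context, a per-IP rate-limit rule in mod_security generally follows this shape — the rule ids, variable names, and messages below are illustrative, not my actual ruleset:

```apache
# Hypothetical sketch of a per-IP rate limit (ids and names are examples only)
# Track a per-client collection keyed on the remote address
SecAction "id:1000,phase:1,pass,nolog,initcol:ip=%{REMOTE_ADDR}"
# Count this request; the counter expires after one second
SecAction "id:1001,phase:1,pass,nolog,setvar:ip.requests=+1,expirevar:ip.requests=1"
# Deny if more than 30 requests arrived within that second
SecRule IP:REQUESTS "@gt 30" "id:1002,phase:1,deny,status:429,log,msg:'Rate limit exceeded'"
```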

These various things, all combined, resulted in a situation where whenever someone attempted to view that page, it would cause my server to crawl to a halt, and the visitor to be added to my mod_security deny list. This was not desired behavior.

Of course, this is all in retrospect. All I knew was that several times a day, I’d get failure notices from my server monitoring, and by the time I got there to see what was happening, the problem had cleared up. So, no big deal, right?

This has been going on for years.

Today, looking at error logs trying to figure out what was happening, I suddenly put all of the pieces together, and fixed the problem, in less time than it has taken me to write this blog post. The solution has a few parts.

First, we exclude anything in the /files/ directory from being processed by php:

# (old line) ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/var/www/vhosts/drbacchus/$1
ProxyPassMatch ^/(?!files)(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/var/www/vhosts/drbacchus/$1

That adds the (?!files) negative lookahead, which says “only apply this rule if the path after the leading slash DOESN’T start with ‘files’.”
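You can sanity-check the lookahead outside of httpd. Here’s a quick sketch using Python’s regex engine, which treats (?!…) the same way for this pattern (the example paths are just illustrations):

```python
import re

# Old pattern: every *.php request gets proxied to PHP-FPM.
old = re.compile(r"^/(.*\.php(/.*)?)$")
# New pattern: the (?!files) negative lookahead skips anything under /files/.
new = re.compile(r"^/(?!files)(.*\.php(/.*)?)$")

print(bool(old.match("/files/avatar(230).php")))  # True  -- old pattern proxied it
print(bool(new.match("/files/avatar(230).php")))  # False -- new pattern skips it
print(bool(new.match("/index.php")))              # True  -- real PHP still proxied
```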

Next, we turn off the mod_security rule specifically for these requests:

<LocationMatch "Exploding">
SecRuleEngine off
</LocationMatch>

This says: don’t run the SecRuleEngine for requests whose URL contains ‘Exploding’, which appears in the URL of the static copy of the blog post.

Finally, I have to tell httpd that the .php files in the static copy are, in fact, jpeg files:

<Directory /var/www/vhosts/drbacchus/files>
AddType image/jpeg .php
</Directory>

This has the added benefit that if anybody dropped a .php file in my files directory, it would be defanged, so to speak, and wouldn’t execute.


300 TFTC

I just found my 300th geocache! I started geocaching in March of 2003. It was a difficult time and I needed a reason to get out of the house and do something other than sit and stare at the walls. And so I started geocaching. I met a lot of good friends while geocaching, although I’m not in touch with very many of them anymore. Today I’m up in New Jersey for a wedding and took the opportunity to go out and get the last two caches to push me past the 300 mark. Thanks for the cache!

Thermodynamics

I asserted to my daughter last week that a paper cup with water in it will not catch fire if placed directly in a fire. So, of course, we had to try it.

I was a little nervous, but it turns out this is completely true. The cup burned down to the water line, and then didn’t burn until the water had completely boiled off. The *instant* the last of the water boiled off, the cup burst into flames and was gone almost immediately. (Animated version of image is here.)

Why? Well, it’s because water boils at 212°F (100°C) and paper ignites at around 451°F (233°C), so as long as there is water in the cup, the heat is conducted from the cup into the water, heating it towards boiling, and the cup stays too cool to ignite. Once the water starts boiling, the cup is full of steam, which quickly carries the heat away. The moment the water has all evaporated, though, the cup abruptly reaches combustion temperature and goes up in a flash.
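A rough energy budget shows just how much time the water buys. Assuming a typical cup holds about 250 mL of water starting at room temperature (these figures are my assumptions, not measurements from the experiment):

```python
# Back-of-the-envelope: how much heat the water absorbs before the cup is dry
mass_g = 250.0        # grams of water in the cup (assumed)
c_water = 4.186       # J/(g.C), specific heat of liquid water
latent_heat = 2260.0  # J/g, heat of vaporization at 100 C

heat_to_boiling = mass_g * c_water * (100 - 20)  # warm from 20 C to 100 C
heat_to_vaporize = mass_g * latent_heat          # then boil it all away

print(round(heat_to_boiling / 1000))   # ~84 kJ just to reach boiling
print(round(heat_to_vaporize / 1000))  # ~565 kJ more to boil the cup dry
```

The vaporization step absorbs nearly seven times the energy of the warm-up, which is why the cup survives so long after the water starts boiling.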

You should try it. It’s a great way to impress your kids. Or win a bet.

Blogging, and feedreaders

A week or two ago I had a conversation with Stormy about the lost art of blogging, and blog reading. Long ago, Google Reader was a daily routine, and kept me in touch with the blogs that I wanted to read, and made me more likely to write blogs of my own. When Google Reader died, nothing really took its place, and the things that kinda sorta took its place – Facebook and Twitter – do a terrible, terrible job of giving me the sources I actually want, and, instead, feed me a steady diet of pablum and clickbait.

Yesterday, Anil Dash tweeted about Google Reader, and made some great observations about what an important tool it was for a certain population.

The entire Twitter thread is worth reading … and would have made a good blog post.

These two things have inspired me to try Feedly again. It is much better than the last time I tried to use it, and I have high hopes that I’ll actually stick with it this time, and make it part of my daily routine again. I hope. I also hope that this will result in my actually writing again, like I used to do, on a nearly daily basis.

Pi-Hole

In honor of Pi Day, I built and deployed a Pi-Hole server.

Pi-Hole is software that acts as a caching DNS server and ad blocker, black-holing known advertising domains at the DNS layer.

You can obtain Pi Hole at https://pi-hole.net/

As the name suggests, it is optimized to run on a Raspberry Pi. I’m running it on a Pi B that was otherwise unoccupied.

It’s been running for a couple of days now, and tells me that it is blocking around 25% of DNS queries. And because it stops those requests before the browser even connects to the ad server, it is making my network faster as a result.

It took me very little time to get running, following the instructions on the website. Indeed, the longest part of the entire process was the initial Raspberry Pi operating system installation. The actual Pi Hole installation took maybe 10 minutes.

So far there has been no negative impact that I’ve noticed – no false positives, no pages I couldn’t get to that I wanted.

Recommended. Give it a try if you have a Raspberry Pi that has been sitting around since Christmas and you’re not sure what to do with it.

SnowpenStack

I’m heading home from SnowpenStack and it was quite a ride. As Thierry said in our interview at the end of Friday (coming soon to a YouTube channel near you), rather than spoiling things, the freak storm and subsequent closure of the event venue created a shared experience and camaraderie that made it even better.

In the end I believe I got 29 interviews, and I’ll hopefully be supplementing this with a dozen online interviews in the coming weeks.

If you missed your interview, or weren’t at the PTG, please contact me and we’ll set something up. And I’ll be in touch with all of the PTLs who were not already represented in one of my interviews.

A huge thank you to everyone that made time to do an interview, and to Erin and Kendall for making everything onsite go so smoothly.
