
Switching to KDE

I’m in the midst of switching from Gnome to KDE/Plasma. I’m doing this because Kdenlive crashes a lot less under KDE, and the every-3-minutes crashes were making video editing amazingly painful.

I’m actually really liking it. The biggest problem right now (less than 24 hours in) is muscle memory making unexpected things happen.

One of the things I liked most about MacOS was that I had different applications on different virtual desktops, and I had my fingers trained so that if I wanted, say, to go to email, that was on desktop 2, and alt-2 took me there. This was never possible (or, at least, easy) on Gnome. But it’s easy on KDE, and I’m rapidly getting back into that habit, even though it’s been roughly 5 years since I’ve used a Mac.

There are, of course, small irritations, having more to do with what I’m used to than whether they are “good” or “bad”. But I think, overall, in addition to reducing how long it takes to edit video, this will be a net win for productivity.

We’ll see.

Upcoming events (June and beyond)

I’m about to head out for a few events again, and I’m in the process of planning several other events.

First, I’ll be in Berlin for FOSS Backstage, Berlin Buzzwords, and the Apache EU RoadShow. This is a trifecta of open source events happening at the Kulturbrauerei in Berlin. I’ll be speaking at Backstage about mentoring in open source, which, you might know, is something I’m passionate about. I’ll also be doing interviews for Feathercast, so if you’re going to be there, find me and do an interview.

I’ll be home for a week, and then I’ll be attending the ISC-HPC Supercomputing event in Frankfurt. This is the second time I’ll attend this event, which was my introduction to Supercomputing last year. I’ve learned so much since then, but I’m still an HPC newbie. While there, I hope to spend most of my time speaking with the EDUs and research orgs that are present, and doing interviews with the student supercomputing teams that are participating in the Student Cluster Competition.

Beyond that, I’m planning several events, where I’ll be representing CentOS.

In August, I’ll be attending DevConf.us in Boston, and on the day before DevConf, we’ll be running a CentOS Dojo at Boston University. The call for papers for that event is now open, so if you’re doing anything interesting around CentOS, please submit a paper and come hang out with us.

Later in August, I will (maybe? probably?) be going to Vancouver for Open Source Summit North America (formerly Linuxcon) to represent CentOS.

In September, I’ll be at ApacheCon North America in Montreal. The schedule for this event is published, and registration is open. You should really come. ApacheCon is something I’ve been involved with for 20 years now, and I’d love to share it with you.

October is going to be very full.

CentOS is proudly sponsoring Ohio LinuxFest, which apparently I last attended in 2011! (That can’t be right, but that’s the last one I have photographic evidence for.) We (CentOS) will be sharing our booth/table space with Fedora, and possibly with some of the projects that use the CentOS CI infrastructure for their development process. More details as we get closer to the event. That’s October 12th – 13th in Columbus.

Then, on October 19th, we’ll be at CERN, in Meyrin, Switzerland, for the second annual CERN CentOS Dojo. Details and the call for papers for that event are on the event website at http://cern.ch/centos.

Immediately after that, I’ll be going (maybe? probably?) to Edinburgh for Open Source Summit Europe. This event was in Edinburgh a few years ago, and it was a great location.

Finally, in November, I plan to attend SuperComputing 18 in Dallas, which is the North American version of the HPC event in Frankfurt, although it tends to be MUCH bigger. Last year, at the event in Denver, I walked just over 4 miles one day on the show floor, visiting the various organizations presenting there.

So, that’s it for me, for the rest of the year, as far as I know. I would love to see you if you’ll be at, or near, any of these venues.

CERN CentOS Dojo, event report: 2 of 4 – CERN tours

(This post is the second in a series of four. They are gathered here.)

The second half of Thursday was where we got to geek out and tour various parts of CERN.

I was a physics minor in college, many years ago, and had studied not just CERN, but many of the actual pieces of equipment we got to tour, so this was a great privilege.

We started by touring the data center where the data from all of the various physics experiments is crunched into useful information and discoveries. This was amazing for a number of reasons.

From the professional side, CERN is the largest installation of RDO – the project I work on at Red Hat – that we know of: 279,000 cores running RDO OpenStack.

For those not part of my geek world, that translates into thousands of physical computers, arranged in racks, crunching data to unlock the secrets of the universe.

For those that are part of my geek world, you can understand why this was an exciting thing to see in person and walk through.

The full photo album is here, but I want to particularly show a couple of shots:

[Photo: Visiting CERN]

Here we have several members of the RDO and CentOS team standing in front of some of the systems that run RDO.

[Photo: Visiting CERN]

And here we have a photo that only a geek can love – this is the actual computer on which the very first website ran. Yes, boys and girls, that’s Tim Berners-Lee’s desktop computer from the very first days of the World Wide Web. It’s ok to be jealous.

There will also be some video over on my YouTube channel, but I haven’t yet had an opportunity to edit and post that stuff.

Next, we visited the exhibit about the Large Hadron Collider (LHC). This was stuff that I studied in college, and have geeked out about in the years since then.

There are pictures from this in the larger album, but I want to point out one particular picture of something that absolutely blew my mind.

Most of the experiments at the LHC involve accelerating subatomic particles (mostly protons) to very high speeds – very close to the speed of light – and then crashing them into something. When this happens, bits fly off in random directions, and the equipment has to detect those bits and learn things about them – their mass, speed, momentum, and so on.

In the early days, one of the ways that they did this was to build a large chamber and string very fine wires across it, so that when particles hit those wires they would cause electrical impulses.

Those electrical impulses were captured by these:

[Photo: CERN visit]

Those are individual circuit boards. THOUSANDS of them, each individually hand-soldered. Those are individual resistors, capacitors, and ICs, soldered to boards by hand. The amount of work involved – the dedication, time, and attention to detail – is simply staggering. This photo shows perhaps 1/1000th of the total number of boards. If you’ve done any hand-soldering or electronics projects, you’ll have a small sense of the scale of this thing. I was absolutely floored by this device.

Outside on the lawn were several pieces of gigantic equipment that were used in the very early days of particle physics, and this was like having the pages of my college textbook there in front of me. I think my colleagues thought I’d lost my mind a little.

College was a long time ago, and most of the stuff I learned has gone away, but I still have a sense of awe about it all. That an idea (let’s smash protons together!) resulted in this stuff – and in more than 10,000 people working in one place to make it happen – is really a testament to the power of the human mind. I know some of my colleagues were bored by it all, but I am still reeling a little from being there, and seeing and touching these things. I am so grateful to Tim Bell and Thomas Oulevey for making this astonishing opportunity available to me.

Finally, we visited the ATLAS experiment, where they have turned the control room into a fish tank where you can watch the scientists at work.

[Photo: CERN visit]

What struck me particularly here was that most of the people in the room were so young. I hope they have a sense of the amazing opportunity that they have here. I expect that a lot of these kids will go on to change the world in ways that we haven’t even thought of yet. I am immensely jealous of them.

So, that was the geek chapter of our visit. Please read the rest of the series for the whole story.

Software Morghulis

In George R. R. Martin’s books “A Song of Ice and Fire” (which you may know by the name “A Game of Thrones”), the people of Braavos have a saying – “Valar Morghulis” – which means “All men must die.” As you follow the story, you quickly realize that this statement is not made in a morbid or defeatist sense, but reflects on what we must do while alive so that death, while inevitable, isn’t meaningless. Thus, the traditional response is “Valar Dohaeris” – all men must serve – to give meaning to their lives.

So it is with software. All software must die. And this should be viewed as a natural part of the life cycle of software development, not as a blight, or something to be embarrassed about.

Software is about solving problems – whether that problem is calculating launch trajectories, optimizing your financial investments, or entertaining your kids. And problems evolve over time. In the short term, this leads to the evolution of the software solving them. Eventually, however, it may lead to the death of the software. It’s important what you choose to do next.

You win, or you die

One of the often-cited advantages of open source is that anybody can pick up a project and carry it forward, even if the original developers have given up on it. While this is, of course, true, the reality is more complicated.

As we say at the Apache Software Foundation, “Community > Code”. Which is to say, software is more than just lines of source code in a text file. It’s a community of users, and a community of developers. It’s documentation, tutorial videos, and local meetups. It’s conferences, business deals and interpersonal relationships. And it’s real people solving real-world problems, while trying to beat deadlines and get home to their families.

So, yes, you can pick up the source code, and you can make your changes and solve your own problems – scratch your itch, as the saying goes. But a software project, as a whole, cannot necessarily be kept on life support just because someone publishes the code publicly. One must also plan for the support of the ecosystem that grows up around any successful software project.

Eric Raymond recently released the source code for the 1970s computer game Colossal Cave Adventure on GitHub. This is cool for us greybeard geeks, and also for computer historians. It remains to be seen whether the software actually becomes an active open source project, or whether it has merely moved to its final resting place.

The problem that the software solved – people want to be entertained – still exists, but that problem has greatly evolved over the years, as new and different games have emerged, and our expectations of computer games have radically changed. The software itself is still an enjoyable game, and has a huge nostalgia factor for those of us who played it on greenscreens all those years ago. But it doesn’t measure up to the alternatives that are now available.

Software Morghulis. Not because it’s awful, but because its time has passed.

Winter is coming

The words of House Stark in “A Song of Ice and Fire” are “Winter is coming.” As with “Valar Morghulis,” this is about planning ahead for the inevitable, and not being caught surprised and unprepared.

How we plan for our own death, with insurance, wills, and data backups, isn’t morbid or defeatist. Rather, it is looking out for those that will survive us. We try to ensure continuity of those things which are possible, and closure for those things which are not.

Similarly, planning ahead for the inevitable death of a project isn’t defeatist. Rather, it shows concern for the community. When a software project winds down, there will often be a number of people who continue to use it. This may be because they have built a business around it. It may be because it perfectly solves their particular problem. Or it may be that they simply can’t afford the time, or cost, of migrating to something else.

How we plan for the death of the project prioritizes the needs of this community, rather than focusing merely on the fact that we, the developers, are no longer interested in working on it, and have moved on to something else.

At Apache, we have established the Attic as a place for software projects to come to rest once their developer communities have dwindled. While a project may reach the point where it can no longer adequately shepherd its own code, the Foundation as a whole still has a responsibility to the users, companies, and customers who rely on the software itself.

The Apache Attic provides a place for the code, downloadable releases, documentation, and archived mailing lists, for projects that are no longer actively developed.

In some cases, these projects are picked up and rejuvenated by a new community of developers and users. However, this is uncommon, since there’s usually a very good reason that a project has ceased operation. In many cases, it’s because a newer, better solution has been developed for the problem that the project solved. And in many cases, it’s because, with the evolution of technology, the problem is no longer important to a large enough audience.

However, if you do rely on a particular piece of software, you can rely on it always being available there.

The Attic does not provide ongoing bug fixes or make additional releases. Nor does it make any attempt to restart communities. It is merely there, like your grandmother’s attic, to provide long-term storage. And, occasionally, you’ll find something useful and reusable as you’re looking through what’s in there.

Software Dohaeris

The Apache Software Foundation exists to provide software for the public good. That’s our stated mission. And so we must always be looking out for that public good. One critical aspect of that is ensuring that software projects are able to provide adequate oversight, and continuing support.

One measure of this is that there are always (at least) three members of the Project Management Committee (PMC) who can review commits, approve releases, and ensure timely security fixes. And when that’s no longer the case, we must take action, so that the community depending on the code has clear and correct expectations of what they’re downloading.

In the end, software is a tool to accomplish a task. All software must serve. When it no longer serves, it must die.

Event report: ApacheCon North America, 2017, Miami


May 15-19, 2017

(This is an abridged version of the report I sent to my manager.)

Last week I attended ApacheCon North America in Miami. I am the conference chair of ApacheCon, and have been, on and off, for about 15 years. Red Hat has been a sponsor of ApacheCon almost every single time since we started doing it 17 years ago. In addition to being deeply involved in specific projects, such as Tomcat, ActiveMQ, and Mesos, we are tangentially involved in many of the other projects at the Apache Software Foundation.

Presentations from ApacheCon may be found at https://www.youtube.com/playlist?list=PLbzoR-pLrL6pLDCyPxByWQwYTL-JrF5Rp (Yes, that’s the Linux Foundation’s YouTube channel – this ApacheCon was produced by the LF events team.)

I’d like to draw specific attention to Alan Gates, at Hortonworks, who has developed a course to train people at the company in how to work with upstream projects. He did this because the company has expanded from a small group of founders who deeply understood open source to thousands of employees who kinda sorta get it, but not always.


Also of great interest was the keynote by Sandra Matz about what your social media profile tells the world about you. It’s worth watching all the way to the end, as she doesn’t just talk about the reasons to be terrified of the interwebs, but also about how this kind of analysis can actually be used for good rather than evil.

Event report: Red Hat Summit, OpenStack Summit


May 1-5, 2017 and May 8-11, 2017

During the first two weeks of May, I attended Red Hat Summit, followed by OpenStack Summit. Since both events were in Boston (although not at the same venue), many aspects of them have run together.

Mini-cluster

On the first day of Red Hat Summit, I received the mini-cluster, which had been built in Brno for the April Brno open house. There were one or two steps missing from the setup instructions, so even with a great deal of help from Hugh Brock, it took most of the first day to get the cluster running. We’ll be publishing more details about the mini-cluster on the RDO blog in the next week or two. However, most of the problems were 1) it was physically connected incorrectly (i.e., my fault) and 2) there were some routing table changes that were apparently not saved after the initial setup.
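
For anyone wondering what “not saved” means in practice: on CentOS 7, routes added by hand with the ip command disappear at the next reboot unless they are also written to a route-<interface> file. A minimal sketch of that kind of fix is below; the interface name and addresses are placeholders for illustration, not the mini-cluster’s actual network layout.

    # Hypothetical example: eth0 and the addresses below are made up.
    # Add the route for the current session...
    ip route add 10.10.10.0/24 via 192.168.1.1 dev eth0
    # ...and persist it so it is re-applied after a reboot.
    echo '10.10.10.0/24 via 192.168.1.1 dev eth0' > /etc/sysconfig/network-scripts/route-eth0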

Once the cluster was up, we connected to the ManageIQ cluster on the other side of our booth, and they were able to manage our OpenStack deployment. Thus, we were able to demonstrate the two projects working together.

In future events, we’d like to bring more projects into this arrangement – say, use Ceph for storage, or have ManageIQ managing OpenStack and oVirt, for example.

Once we had the cluster working, on subsequent days we just had to power it on, follow the startup instructions, and be patient. Again, more details on this will be in the RDO blog post in the coming weeks.

Upcoming CentOS Dojos

I had conversations with two groups about planning upcoming CentOS Dojos.

The first of these will be at Oak Ridge National Labs (ORNL), and is now tentatively scheduled for the first Tuesday in September. (If you saw my internal event report, I mentioned July/August. This has since changed.) They’re interested in doing a gathering that would be about both CentOS and OpenStack, and draw together some of the local developer community. This will be held in conjunction with the local LOPSA group.

The second Dojo that we’re planning will be at CERN, where we have a great relationship with the cloud computing group, who run what we believe to be the largest RDO installation in the world. We have a tentative date of October 20th, immediately before Open Source Summit in Prague to make it easier to combine two trips for those traveling internationally. This event, too, would cover CentOS topics as well as OpenStack/RDO topics.

If you’re interested in participating in either one of these events, you need to be on the centos-promo mailing list. Send mail to centos-promo-subscribe@centos.org to subscribe, or visit https://lists.centos.org/mailman/listinfo/centos-promo for the clicky-clicky version.

General Impressions

The Community Central area at Red Hat Summit was awesome. Sharing center stage with the product booths was a big win for our upstream-first message, and we had a ton of great conversations with people who grasped the “X is the upstream for Red Hat X” concept, seemingly for the first time. The “The Roots Are In The Community” posters resonated with a lot of people, so huge thanks to Tigert for pulling those together at the last minute.

The collaboration between RDO and ManageIQ was very rewarding, and helped promote the CloudForms message even more, because people could see it in action, and see how the communities work together for the greater good of humanity. I look forward to expanding this collaboration to all of the projects in the Community Central area by next year.

The space for Red Hat Summit was huge, making the crowd seem a lot smaller than it actually was. The opposite was true for OpenStack Summit, where it was always crowded and seemed very busy, even though the crowd was smaller than last year.


Where next?

In three weeks I’ll be heading to the High Performance Computing event in Frankfurt. My mission there is to talk with people who are using CentOS and RDO in HPC, and collect user stories.

No, Apache did not send you spam

Today, the ASF received yet another complaint from a distraught individual who had, in their opinion, received spam from the Apache Software Foundation. This time, via our Facebook page. As always, this is because someone sent them email, and in that email is a link to a website – in this case, www125.forcetwo.men, which is displaying a default (i.e., incorrectly configured) Apache web server page, running on CentOS.

This distraught individual threatened legal action against the ASF, and against CentOS, under FBI, Swedish, and International law, for sending them spam.

No, Apache didn’t send you spam. Not only that, but Apache software wasn’t used to send you spam. Unfortunately, the spammer happened to be running a misconfigured copy of software we produced. That’s the extent of the connection. Also, they aren’t even competent enough to correctly configure their web server.

It would be like holding a shovel company liable because someone dug a hole in your yard.

Or, better yet, holding a shovel company liable because someone crashed into your car, and also happened to have a shovel in their trunk at the time.

We get these complaints daily, to various email addresses at the Foundation, and via various websites and Twitter accounts. While I understand that people are irritated at receiving spam, there’s absolutely nothing we can do about it.

And, what’s more, it’s pretty central to the philosophy of open source that we don’t put restrictions on what people use our software for – even if they *had* used our software to send that email. Which they didn’t.

So stop it.


Open source stats – but what do the numbers *mean*?

I recently sent a report to project management containing some numbers that purport to describe the status of the RDO project.

I got a long and thoughtful response from one of the managers – we’ll call him Mark – and it seems worthwhile sharing some of his insights. To summarize, what he said was, don’t bother collecting stats if they don’t tell a story.

1. Focus on the goals

Listing a bunch of numbers without context – even with pretty graphs – doesn’t tell us anything unless you relate them to goals that we’re trying to achieve.

Several weeks ago I presented a “stakeholder review” to this same audience. Any statistics that I present in the future should be directly related to a goal in that review, or they are just meaningless numbers, and possibly a distraction, and, worse still, might cause people to work towards growing the wrong metric. (Google for “be careful what you measure” and read any of those articles for more commentary on this point.)

2. Focus on the people

One of the stats that I provided was about how certain words and phrases feature in the questions on ask.openstack.org. Mark looked beyond the numbers and saw three people who are very active on that website, two of whom are not obviously engaged in the RDO community itself. Why not? How can we help them? How can they help us? What’s their story? Why are we ignoring them?

3. Focus on the blips

In February, our Twitter mentions, retweets, visits, and so on, went through the roof. Why? And why didn’t we do that same thing again in March?

As it turns out, in February there were two conferences that contributed to this. But, specifically, we captured a lot of video at those events, and the Twitter traffic was all around those videos. So clearly we should be doing more of that kind of content, right?

4. Ignore the stuff that doesn’t seem to mean anything

We track “downloads” of RDO, which roughly speaking means every time someone runs the quickstart and it grabs the RPM. Except RDO is on a mirror network, so that number is false – or, at best, it reflects what the trends might be across the rest of the mirror network. So we have no idea what this metric means. So why are we bothering to track it? Just stop.

5. Ask not-the-usual-suspects

This last one wasn’t one of Mark’s observations, but is what I’m taking from this interaction. We tend to ask the same people the same questions year after year, and then are surprised that we get the same answers.

By taking this data to a new audience, I got new answers. Seems obvious, right? But it’s the kind of obvious thing we overlook all the time. Mark provided insight that I’ve been overlooking because I’m staring so hard at the same things every day.

By the way, I’ve presented Mark’s insight very bluntly here, because it’s important to be clear and honest about the places where we’re not doing our job as well as we can be. Mark’s actual response was much kinder and less judgmental, because Mark is always kind and supportive.

Moving to CentOS

TL;DR: Leaving OpenStack; Moving to CentOS; Still at Red Hat.

4 years ago, I came to Red Hat, and started as the OpenStack Community Liaison, working primarily with the RDO project, but more generally with all of Red Hat’s involvement in the upstream OpenStack project.

I took over from Dave Neary, but it took a while to actually replace him. His depth of knowledge and experience with the community were not easy to step into.

Over those 4 years, I’ve become much more knowledgeable about OpenStack – the community as well as the technology. It’s a wonderful community, with a passion for open source, for doing things transparently and collaboratively, and for doing things well. The individuals in the community have been great to get to know – both people here at Red Hat, as well as people in other organizations, and at the OpenStack Foundation. I could certainly call out dozens of individuals who have made my time with OpenStack smoother. The names that come to mind are Haikel Guemar, Stefano Maffuli, Perry Meyers, Eliska Malikova, Alan Pevec, Jakub Ruzicka, Rain Leander … see, I knew that as soon as I got started I would find that there’s no end in sight.

Rain, in particular, has really stood out as someone who is hugely passionate about the community around OpenStack, and who has been just such a delight to work with, particularly when we were able to attend the same events and work together in person.

This is why I’m so excited to announce that Rain will be taking my place as the RDO/OpenStack community manager for Red Hat, effective immediately. I cannot think of anybody more qualified, in skills and temperament, for this position, and I am completely confident that I’m leaving the community in good hands. One develops a lot of ownership of a project over four years, but I have no doubt she’ll take good care of it.

I’m not leaving Red Hat, though. Instead, I’m moving over to be more active in the CentOS community. CentOS is an exciting community that is absolutely critical to Red Hat. It’s the place where community projects, like RDO, as well as many others, do their development and testing, before being deployed and supported on Red Hat Enterprise Linux. I’ll be focusing on a variety of things, including CentOS in HPC (High Performance Computing) and IoT (Internet of Things).

CentOS presents a number of challenges from a community perspective, and I’m very pleased to be more active there. It will be an interesting and challenging place to be, and I’ll be, once again, working with an awesome group of people. I’m sure I’ll be telling lots of CentOS stories on this website in the years to come, so stay tuned.

Follow RDO on Twitter at @rdocommunity and follow CentOS on Twitter at @centos to keep up to date with the two communities, as well as learn how we work together.

todo.txt

Reposting from an email I sent a while back:

As several people have asked about my todo list within the last 2 weeks, I thought I’d share the goodness with everyone.

I’ve been using todo.txt for about a year now. http://todotxt.com/

Don’t let the website fool you. todo.txt isn’t (primarily) a GUI app or a phone app. The todo list is in a plain text file. There are a dozen different tools that you can use to manage it, but I just use the command line (there’s a sketch of the setup right after the list):

t ls – what’s in my list?
t add ITEM – Adds ITEM to my todo list
t pri ## A – Makes item ## priority A
t do ## – Marks item ## as done, moves it to DONE list for later reference
t ls blarg – Lists todo items that match ‘blarg’
t lsp A – Show me all the things that are priority A
done – An alias to ‘cat ~/Dropbox/todo/done.txt’ which shows me what I’ve done most recently
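
In case it helps anyone replicate this, here’s roughly what the underlying setup looks like. This is a sketch of the standard todo.txt-cli arrangement rather than my exact config, and the paths are only examples:

    # todo.txt-cli ships a script called todo.sh.
    # In the config file (e.g. ~/Dropbox/todo/todo.cfg), point the lists at Dropbox:
    #   export TODO_DIR="$HOME/Dropbox/todo"
    # Then in ~/.bashrc:
    alias t='todo.sh -d ~/Dropbox/todo/todo.cfg'
    # The 'done' helper is just: cat ~/Dropbox/todo/done.txt
    # Example session:
    t add "book flights for Frankfurt"   # appends an item to todo.txt
    t pri 1 A                            # flags item 1 as priority A
    t ls                                 # lists open items
    t do 1                               # marks item 1 done, moving it to done.txt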

If you happen to store your todo list in your Dropbox directory, you can then also use the free Android app to manage your todo list from your phone. (I’ve heard it also works with Google Drive, or ownCloud, or a variety of other things.)

As someone who has used every possible todo list out there, including a dozen issue trackers, and who has written a few different todo list webapps, I find sticking with a single tool for a whole year unprecedented. Being able to work from the command line made all the difference for me, since that’s where I always am anyway.