In this morning's community managers' meeting at work, I presented on using video as part of our community promotion. As I said at the beginning, this is a hobby which I enjoy – although I'm far from being an expert. And I'm trying to figure out whether it's actually useful as part of promoting our projects.
Here’s a summary of what I talked about, and some of the questions that were asked.
This camera is available in a number of different models. I got the one that had the largest on-board storage, so that I didn’t have to mess with SD cards. However, you can get it a lot more cheaply with less on-board storage, and get as large an SD card as you think you’re going to need. For reference, a minute of raw video at default settings on my camera takes roughly 150MB. Plan accordingly.
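To make that planning concrete, here's a quick sketch of the arithmetic, assuming the ~150MB-per-minute figure from my camera (your camera's bitrate and settings will differ):

```python
# Rough storage estimate for raw video, assuming ~150 MB per minute
# (the figure from my camera at default settings -- yours will differ).
MB_PER_MINUTE = 150

def storage_needed_gb(minutes_of_footage: float) -> float:
    """Return approximate storage in GB for a given amount of raw footage."""
    return minutes_of_footage * MB_PER_MINUTE / 1000

# An hour-long event of raw footage:
print(f"{storage_needed_gb(60):.0f} GB")  # 9 GB
```

So a 32GB SD card holds a bit over three hours of raw footage at those settings.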
However, the on-board mic on the camera is pretty good.
Remember that you have a better video camera in your pocket than most movie makers have had in the history of film. Full feature films are being made with iPhones these days.
Meanwhile, I’ve been trying to justify the expense of a GoPro for some time. I really want one, but I realize this is mostly geek appeal, and I probably wouldn’t get enough use out of it to justify the cost. The main reason I didn’t buy one as my original camera is that it doesn’t have an audio input jack, making it less than ideal for the kind of interview situations that I started with.
I edit video with kdenlive – https://kdenlive.org/ – which is free and available for most modern operating systems. It took me perhaps 5 hours to get comfortable with it, and another 10 to feel like I’m really good at it.
Now, if you’re going to be a professional, you’ll probably end up using some expensive software from Adobe, or from Apple. And that’s great. I’m sure they are objectively better, in ways that professional videographers can appreciate. But for us amateurs, kdenlive has everything that we’re likely to need. And it’s free.
Several years ago I tried to teach myself to use iMovie – back when I was a Mac user – and found it incredibly intimidating. I’m curious how I would feel about it now that I know more.
There is a huge selection of free music available at Free Music Archive – http://freemusicarchive.org/ – in a wide variety of genres. I have always found something that seemed to fit the video.
Here’s one of my favorite videos featuring music from there:
(Note: If you’re reading this on Facebook, you won’t see the video. Follow the link at the top to my website.)
What’s unclear to me about FMA is what their business model is, and how all of these talented artists are making a living. Sometimes I feel bad about that.
Reasons for making videos:
There’s lots of reasons for making videos. What your reasons are will greatly affect how much time and effort you put into it, what kind of video you make, how you promote it, and so on.
Some reasons include:
Because making videos is fun
To show off a project/hobby/interest/yourself/something cool
To draw people in, so that they go to another site
To advertise a product/project
To put a human face on some project/product/concept
For my personal channel, it’s primarily the first two reasons. This morning I posted a video of a caterpillar, because I wanted to. There’s really no other reason.
For work, though, I have to justify the time that I spend, in terms of what the measurable benefits are. Mostly, in that case, it's the other four reasons.
Which leads to …
Measuring whether it’s effective:
Measuring whether what we do at work advances the goals of the company/project/whatever is tricky. If I enjoy doing something (like making videos) then I’m likely to think that it’s a useful thing to do, and look for reasons that support that.
But it's important that it is actually effective, for some definition of effective. Does it bring traffic to the site? Does it educate people in performing some particular task? In fact, is anybody at all watching the videos, or are they just sitting there sad and lonely?
What to make videos of:
In short, everything. However, this can be extremely time consuming if you want to do postproduction, so you might want to be selective.
For the purposes of work, I recommend:
Events/Presentations. If you're at an event, capture video of the sessions you attend. The people that weren't there sometimes appreciate this. Make sure you ask the speaker if it's ok. Make sure you have a mic, and that the speaker uses it.
Meetings. Maybe. If you use some kind of video conferencing meeting service, and if the content of the meeting might be useful to someone else that couldn’t make it, press record. Why not? I wish I’d recorded the meeting this morning, for example, since I’m sure I’m forgetting something that was discussed.
People. People like to talk. Most of them, anyways. If you ask them to write a blog post or article, they’ll all say yes, but almost none of them will actually do it. But get them in front of a camera and start asking questions, and most people will have something useful to say. Everyone is an expert on something, and they like to talk about it.
YouTube has been kind enough to provide us with infinite storage for anything we want to create. Granted, they have their own reasons for this. But you might as well take advantage of it.
Other topics and questions that came up:
Q: Do you ask people to sign any kind of waiver when you take video, saying that it's ok to put it online for the whole world to see?
A: I never thought of that. I probably should. Seems that it’s also company policy for me to do so, so I should start right away. However, I always clearly set the expectation – “This will be on YouTube, ok?” – and get verbal assent, and this hasn’t, so far, resulted in any problems.
Q: How much time does it take?
A: It depends. If you’re getting a video of a presentation or a meetup, just post the raw video and be done with it. Total time: Length of event plus upload time. If you’re doing a more formal interview, it can take around 5-10 minutes per minute of final product. And if you’re doing something more elaborate like piecing together several clips and trying to tell a coherent story, it can take hours to make a 5 minute product.
And I’m sure that people that do this for a living are both better than I am at it (so they can do it faster) and more careful (so they do more work in that time, for a better finished product).
One thing I didn’t mention is that rendering the video (the process which produces the final uploadable file) takes a really long time. Around 5-10 minutes per minute of video, depending on your computer.
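Putting rough numbers on all of that – and these are my rough figures from above, not measurements, so treat them as assumptions – a time budget for a formal interview-style video looks something like this:

```python
# Back-of-the-envelope time budget for a short edited video, using the
# rough per-minute figures above (assumptions, not measurements).
EDIT_MIN_PER_FINAL_MIN = (5, 10)    # formal interview editing, low/high
RENDER_MIN_PER_FINAL_MIN = (5, 10)  # rendering the final file, low/high

def time_budget(final_minutes: float) -> tuple[float, float]:
    """Return a (low, high) estimate, in minutes, of edit + render time."""
    low = final_minutes * (EDIT_MIN_PER_FINAL_MIN[0] + RENDER_MIN_PER_FINAL_MIN[0])
    high = final_minutes * (EDIT_MIN_PER_FINAL_MIN[1] + RENDER_MIN_PER_FINAL_MIN[1])
    return low, high

low, high = time_budget(5)  # a 5-minute finished video
print(f"{low:.0f}-{high:.0f} minutes")  # 50-100 minutes
```

So a five-minute finished interview can easily eat an hour or two before you even count the time spent shooting.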
So, really, you need to experiment, and figure this out.
Q: How do you have people prepare for an interview?
A: I always provide a list of questions ahead of time. Usually a day or two before, so that they can think about what they’re going to say. Giving them a sample video of someone else answering similar questions is a great way to get them prepared. Then at the end, I always ask “was there something that I didn’t ask?” so that they can put in what they actually wanted to say. I then edit that bit into a relevant place in the conversation.
Q: Who are some YouTube'ers that you like to watch?
A: I like Casey Neistat – https://www.youtube.com/user/caseyneistat – because he's really good at this, and his videos are, mostly, a lot of fun. And some of them aren't. But he just posts stuff because it interests him, and this seems to work out for him. (Warning: He's not careful about his salty language. Sometimes.) My kids watch hours and hours of Good Mythical Morning. https://www.youtube.com/user/rhettandlink2 These guys are insane.
Related, I hate technical videos that take information that could be adequately stated in 2 lines of text, and make a 15 minute rambling video about it. There are thousands of "how to install Whatever on CentOS" videos out there that tell you to type "sudo yum install kdenlive" but take 15 minutes of boring voiceover to do it. Thanks. I'll do without.
Q: What’s the hardest thing about starting making videos?
A: Feeling that you don't have anything that anybody will want to watch, and it'll just be stupid. Solve this by just looking at YouTube. 300 hours of video are uploaded to YouTube every minute. That's almost 50 YEARS of video every day. And, sure, most of it, nobody ever watches. But I guarantee you have something more valuable to say than about 45 of those 50 years of content.
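If you want to check that "almost 50 years" figure yourself, the arithmetic is straightforward:

```python
# 300 hours of video uploaded per minute -> how many years of video per day?
hours_per_minute = 300
hours_per_day = hours_per_minute * 60 * 24   # 432,000 hours per day
years_per_day = hours_per_day / (24 * 365)   # hours in a (non-leap) year
print(f"{years_per_day:.1f} years per day")  # 49.3 years per day
```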
(Note: If you’re reading this on Facebook, you’re missing half of the story. Please follow the link to my website to see the rest of the post.)
I’ve been tinkering with making videos for a while, and have been learning a lot over the last year or so. Last week I decided to make a video about something not work related, and picked home automation. As it turns out, what I have to say about home automation fits naturally into 4 (or possibly more) episodes.
Here’s the first one.
What I learned from doing this video:
Lighting is important. This is way too dark.
I need to actually show what I'm talking about, rather than just talking about it.
And here’s my second one:
I'm very pleased with this second video, but I still learned a few things from it. I learned a lot about the tool I'm using (kdenlive) and what you can do to paste various tracks together. Also, watching it again, I'm sure I can do better on sound levels. There's just too much difference between the tracks where I'm sitting at my desk and the "on site" clips. For the former I'm using my desk mic, and the latter uses the onboard mic on my camera. My desk mic is a better mic, but I just have it turned down lower. I can probably also adjust this when I'm editing the video.
In my next episode, I'll be talking about the Philips Hue products, and in the fourth, the Osram/Sylvania Lightify product.
I came into the ISC event pretty ignorant. Here’s some of the things I’ve learned.
Supercomputers run Linux. All of them. This isn’t even a topic of discussion. Yes, I’m sure there are some that don’t, but everyone here just assumes that you are running Linux. And probably two or three Apache products.
Supercomputing isn’t about software. This is a hardware conference.
Supercomputing is primarily about how fast you can get rid of heat. And these people are serious about cooling. I've seen some amazingly cool cooling rigs. Perhaps the coolest of them was this one: https://youtu.be/hs9WG0ZA79Q That unit is called the AIC24, and is manufactured by Asperitas, and is a full submersion rack. You lower your blades into oil, which is in turn cooled by a water cooling pump. This is much quieter than fans, and much more efficient. The oil was cool enough to touch. Enormous supercomputing centers are being built on the edges of lakes specifically so that they can pump cool water from the lake into cooling units like this.
I also saw this cool demo: https://youtu.be/aaEQN8DH0kM You can actually see the oil boiling on the processor. The vapor is then condensed on a cooling unit in the back and trickles back down into the tank.
I have also been blown away by the Student Cluster Competition. These kids have access to hardware that would have blown my mind when I was in school. There are 11 teams competing on a variety of metrics, and they have these astonishing supercomputers at their disposal. I was also amazed to discover that LINPACK is still one of the standard benchmarks. I used that when I was in college!
The student hardware is all sponsored by the vendors that are here at this event – presumably so that they can benefit from the publicity when they win the contest. Check out some of these rigs:
I was pleasantly surprised to discover that of the 11 teams competing, 8 are running CentOS. One other was running Fedora – they wanted to run CentOS, but needed a newer kernel for something (I wasn't very clear on what that was. I'll try to go find out more information today.) The other two were running Ubuntu. CentOS also appears to be the preferred platform for the various research institutes I've talked to. However, these are the groups that chose to come over to the Red Hat booth and talk to me, so I do acknowledge that this is a rather self-selected sample. The sign on the SuSE booth claims that SuSE is the Linux "most used by the top 100 supercomputers." More research is warranted here. But it appears clear, at least from this small sample, and from conversations with the students, that CentOS is just What You Run when you're doing supercomputing.
And finally, I've learned (not that it's a big surprise) that one year of high school German, 30 years ago, is not a great deal of help. And that people are amazingly patient and kind with my ignorance – something that I've discovered almost everywhere in the world.
In George R. R. Martin's books "A Song of Ice and Fire" (which you may know by the name "A Game of Thrones"), the people of Braavos have a saying – "Valar Morghulis" – which means "All men must die." As you follow the story, you quickly realize that this statement is not made in a morbid, or defeatist sense, but reflects on what we must do while alive so that the death, while inevitable, isn't meaningless. Thus, the traditional response is "Valar Dohaeris" – all men must serve – to give meaning to their life.
So it is with software. All software must die. And this should be viewed as a natural part of the life cycle of software development, not as a blight, or something to be embarrassed about.
Software is about solving problems – whether that problem is calculating launch trajectories, optimizing your financial investments, or entertaining your kids. And problems evolve over time. In the short term, this leads to the evolution of the software solving them. Eventually, however, it may lead to the death of the software. It’s important what you choose to do next.
You win, or you die
One of the often-cited advantages of open source is that anybody can pick up a project and carry it forward, even if the original developers have given up on it. While this is, of course, true, the reality is more complicated.
As we say at the Apache Software Foundation, “Community > Code”. Which is to say, software is more than just lines of source code in a text file. It’s a community of users, and a community of developers. It’s documentation, tutorial videos, and local meetups. It’s conferences, business deals and interpersonal relationships. And it’s real people solving real-world problems, while trying to beat deadlines and get home to their families.
So, yes, you can pick up the source code, and you can make your changes and solve your own problems – scratch your itch, as the saying goes. But a software project, as a whole, cannot necessarily be kept on life support just because someone publishes the code publicly. One must also plan for the support of the ecosystem that grows up around any successful software project.
Eric Raymond just recently released the source code for the 1970s computer game Colossal Cave Adventure on GitHub. This is cool, for us greybeard geeks, and also for computer historians. It remains to be seen whether the software actually becomes an active open source project, or if it has merely moved to its final resting place.
The problem that the software solved – people want to be entertained – still exists, but that problem has greatly evolved over the years, as new and different games have emerged, and our expectations of computer games have radically changed. The software itself is still an enjoyable game, and has a huge nostalgia factor for those of us who played it on greenscreens all those years ago. But it doesn’t measure up to the alternatives that are now available.
Software Morghulis. Not because it's awful, but because its time has passed.
Winter is coming
The words of the house of Stark in "A Song of Ice and Fire" are "Winter is coming." As with "Valar Morghulis," this is about planning ahead for the inevitable, and not being caught surprised and unprepared.
How we plan for our own death, with insurance, wills, and data backups, isn’t morbid or defeatist. Rather, it is looking out for those that will survive us. We try to ensure continuity of those things which are possible, and closure for those things which are not.
Similarly, planning ahead for the inevitable death of a project isn't defeatist. Rather, it shows concern for the community. When a software project winds down, there will often be a number of people who will continue to use it. This may be because they have built a business around it. It may be because it perfectly solves their particular problem. And it may be that they simply can't afford the time, or cost, of migrating to something else.
How we plan for the death of the project prioritizes the needs of this community, rather than focusing merely on the fact that we, the developers, are no longer interested in working on it, and have moved on to something else.
At Apache, we have established the Attic as a place for software projects to come to rest once the developer community has dwindled. While the developer community may reach a point where it can no longer adequately shepherd the project, the Foundation as a whole still has a responsibility to the users, companies, and customers who rely on the software itself.
The Apache Attic provides a place for the code, downloadable releases, documentation, and archived mailing lists, for projects that are no longer actively developed.
In some cases, these projects are picked up and rejuvenated by a new community of developers and users. However, this is uncommon, since there’s usually a very good reason that a project has ceased operation. In many cases, it’s because a newer, better solution has been developed for the problem that the project solved. And in many cases, it’s because, with the evolution of technology, the problem is no longer important to a large enough audience.
However, if you do rely on a particular piece of software, you can rely on it always being available there.
The Attic does not provide ongoing bug fixes or make additional releases. Nor does it make any attempt to restart communities. It is merely there, like your grandmother's attic, to provide long-term storage. And, occasionally, you'll find something useful and reusable as you're looking through what's in there.
The Apache Software Foundation exists to provide software for the public good. That’s our stated mission. And so we must always be looking out for that public good. One critical aspect of that is ensuring that software projects are able to provide adequate oversight, and continuing support.
One measure of this is that there are always (at least) three members of the Project Management Committee (PMC) who can review commits, approve releases, and ensure timely security fixes. And when that’s no longer the case, we must take action, so that the community depending on the code has clear and correct expectations of what they’re downloading.
In the end, software is a tool to accomplish a task. All software must serve. When it no longer serves, it must die.
As part of Stormy’s ongoing blog challenge, here’s my take on “Three best features of open source events.”
1. The hackathon
While there is considerable evidence that the term “hackathon” should be avoided (No, I can’t find the article right now. I’ll keep looking), the collaborative space at an event is, in my opinion, the most important part of an open source event.
Open source events are educational, of course. You can attend a talk and learn things. But most of the information that you need to learn is available, free, online. So to me the most important part of an event is the opportunity to meet and collaborate with the other people on the project.
Defining a specific space for this is critical to get people to sit down and play along. Signs identifying project teams or topics are even more welcoming. Having a white board where people can identify specifically what they are working on gives a way for introverts to be overtly welcoming of other people with similar interests.
Publicizing the collaborative space well in advance of the event gives the opportunity for people to discuss what they might work on, and gives some people the added incentive to show up at all.
2. The after-party
While it’s indeed a cliche (because it’s true!) that open source events have too much alcohol, having an after-event, with or without food and/or drinks, is a critical part of the event. It gives a specific time and place for your community to get to know one another in a less formal atmosphere, and talk about something other than code. These kinds of community bonds will simply never happen on the mailing list, which is by design focused on the project, the code, the design, and so on, rather than on the personalities.
Open source communities fail because of personality issues at least as often as they do because of code issues. Providing a specific time and space to address these issues saves communities. As we say at Apache, Community > Code.
3. The keynotes
Picking good keynotes is really hard, because keynotes should be inspiring. As such, they don’t always have to be directly related to the topic of the event, but should be, in some way, of interest to the audience.
A keynote should be delivered by someone who is engaging and eloquent. And it should have some kind of call to action, or end on a note that inspires the audience to go do something.
(This is an abridged version of the report I sent to my manager.)
Last week I attended ApacheCon North America in Miami. I am the conference chair of ApacheCon, and have been, on and off, for about 15 years. Red Hat has been a sponsor of ApacheCon almost every single time since we started doing it 17 years ago. In addition to being deeply involved in specific projects, such as Tomcat, ActiveMQ, and Mesos, we are tangentially involved in many of the other projects at the Apache Software Foundation.
Presentations from ApacheCon may be found at https://www.youtube.com/playlist?list=PLbzoR-pLrL6pLDCyPxByWQwYTL-JrF5Rp (Yes, that’s the Linux Foundation’s YouTube channel – this ApacheCon was produced by the LF events team.)
I'd like to draw specific attention to Alan Gates, at Hortonworks, who has developed a course to train people at the company in how to work with upstream projects. He did this because the company has expanded from a small group of founders who deeply understood open source to thousands of employees who kinda sorta get it, but not always.
Also of great interest was the keynote by Sandra Matz about what your social media profile tells the world about you. It’s worth watching all the way to the end, as she doesn’t just talk about the reasons to be terrified of the interwebs, but also about how this kind of analysis can actually be used for good rather than evil.
During the first two weeks of May, I attended Red Hat Summit, followed by OpenStack Summit. Since both events were in Boston (although not at the same venue), many aspects of them have run together.
On the first day of Red Hat Summit, I received the mini-cluster, which had been built in Brno for the April Brno open house. There were one or two steps missing from the setup instructions, so with a great deal of help from Hugh Brock, it took most of the first day to get the cluster running. We'll be publishing more details about the mini-cluster on the RDO blog in the next week or two. However, most of the problems were 1) it was physically connected incorrectly (i.e., my fault) and 2) there were some routing table changes that were apparently not saved after initial setup.
Once the cluster was up, we connected to the ManageIQ cluster on the other side of our booth, and they were able to manage our OpenStack deployment. Thus, we were able to demonstrate the two projects working together.
In future events, we’d like to bring more projects into this arrangement – say, use Ceph for storage, or have ManageIQ managing OpenStack and oVirt, for example.
After we got the cluster working, in subsequent days, we just had to power it on, follow the startup instructions, and be patient. Again, more details of this will be in the RDO blog post in the coming weeks.
Upcoming CentOS Dojos
I had conversations with two groups about planning upcoming CentOS Dojos.
The first of these will be at Oak Ridge National Labs (ORNL), and is now tentatively scheduled for the first Tuesday in September. (If you saw my internal event report, I mentioned July/August. This has since changed.) They're interested in doing a gathering that would be about both CentOS and OpenStack, and draw together some of the local developer community. This will be held in conjunction with the local LOPSA group.
The second Dojo that we’re planning will be at CERN, where we have a great relationship with the cloud computing group, who run what we believe to be the largest RDO installation in the world. We have a tentative date of October 20th, immediately before Open Source Summit in Prague to make it easier to combine two trips for those traveling internationally. This event, too, would cover CentOS topics as well as OpenStack/RDO topics.
If you're interested in participating in either one of these events, you need to be on the centos-promo mailing list. Send mail to email@example.com to subscribe, or visit https://lists.centos.org/mailman/listinfo/centos-promo for the web interface.
The Community Central area at Red Hat Summit was awesome. Sharing center stage with the product booths was a big win for our upstream first message, and we had a ton of great conversations with people who grasped the “X is the upstream for Red Hat X” concept, seemingly, for the first time. The “The Roots Are In The Community” posters resonated with a lot of people, so huge thanks to Tigert for pulling those together at the last minute.
The collaboration between RDO and ManageIQ was very rewarding, and helped promote the CloudForms message even more, because people could see it in action, and see how the communities work together for the greater good of humanity. I look forward to expanding this collaboration to all of the projects in the Community Central area by next year.
The space for Red Hat Summit was huge, making the crowd seem a lot smaller than it actually was. The opposite was true for OpenStack Summit, where it was always crowded and seemed very busy, even though the crowd was smaller than last year.
Today I received email from a service I use – Expensify. In this message, the CEO of the company acknowledged that the name that they had chosen for one of their services was a bad choice, and they were consequently changing it:
2) “Wingman” renamed to “Copilot”
Remember how we had the genius idea of naming our amazing delegated access feature (where one user can sign in to another’s account to help them out) “Wingman”? As a child of the 80’s I just assumed that name conjured up images of Top Gun fighter jets and double high-fives in everyone. But it turns out that to the children of the 90’s and beyond, it means cruising bars and picking up chicks — who knew? Actually, almost everyone it seems. So, bowing to the wisdom of the crowd, “Wingman” is now the less-offensively named “Copilot”. My bad!
Meanwhile, the President of the United States of America made a typo on Twitter (no big deal, we’ve all done it) and then, rather than just saying “oops”, sent his press secretary out on stage to claim that it was intentional – a coded message, no less.
There’s a video at http://thehill.com/homenews/administration/335809-spicer-offers-cryptic-explanation-for-trump-covfefe-tweet if you missed it, or don’t believe me.
One of the benchmarks of becoming an adult is an ability to admit an error. One of the marks of being a child is that one defends one’s mistakes, even when they are inconsequential, like a typo.
One day, I hope we have an adult in the White House again.