Tired, but happy
No, FOSDEM is not over yet, even if that title might suggest otherwise.
I've just arrived home from a very long day.
It started this morning at around 9:00, when I went to my office, where we did a few final tests on our video setup; we then loaded everything up and drove to the ULB campus. Most of the rest of the day was lost to setting things up and doing more testing.
We were mostly ready around 21:00, but also mostly tired. All that remained was some work in rooms that would only open at 13:00, so we decided to call it a day. I then went to the beer event for an hour or so; not because I wanted to get drunk, but because I wanted to meet a few people. That worked, although I suspect most of the people I know had already left, since I didn't see many of them.
Right now, it's time for bed. With a healthy dose of work, some determination, and a little bit of luck, this weekend everything will run smoothly.
See you then.
Video at FOSDEM 2013
For the second year in a row, I've led the team that organized video at FOSDEM. This year, most things went pretty smoothly; more so than last year. We definitely learned from some of our mistakes, and although I can't say that everything went better than it did last year, after taking a first look at the results I can certainly say that, overall, they are better than they were last year.
Our setup was very similar to what it was last year. However, there were some key differences:
- Last year, we used two semi-professional cameras (with XLR audio inputs), two HD camcorders (with only mini-jack audio inputs), and one old SD-only camcorder (likewise with only a mini-jack audio input). This year, we decided to rent three XLR-capable cameras instead; while this drove the cost up, it seriously reduced the complexity of the audio setup. As a result, we had far fewer issues with the audio on recorded talks.
- Due to some misunderstanding, last year the ULB's A/V department was not contacted to request access to their equipment. As a result, we had to do our own audio setup in Janson and the K auditorium. This year, we made sure that this problem did not repeat itself; as a result, in the K auditorium and in Janson, all we had to do was connect an XLR feed to the camera, and we had audio.
- Last year, the machines doing transcoding were two rented laptops, one fairly recent spare laptop that I had lying around, and one somewhat older desktop machine. This was needlessly complex. This year, I rented fewer laptops but added a server with significant diskspace on which we could do the Flumotion recording. This made the streaming and recording somewhat less complex to administer.
- Most critically, this year I had arranged for special green Videoteam t-shirts. This made it much easier to recognize who my trained videoteam volunteers were, and was instrumental in ensuring we didn't have the same volunteers man the same recording station all Sunday afternoon, as was the case last year.
Unfortunately, that didn't mean all was well. The internal server would transcode the DV stream into WebM, which was then sent on to the Flumotion platform where it was going to be transcoded to Ogg Theora, before being streamed in both formats. Due to an issue which we could not track down (possibly something incorrect in my configuration), the transcoding into Ogg Theora did not work smoothly, and even had the unwanted effect of disrupting the WebM streaming. Since this made it difficult to debug without disrupting service, eventually we decided to give up on Ogg Theora and only provide WebM streams; I considered that to be a better solution than no streaming at all.
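For the technically curious, the internal transcoding step was conceptually similar to the gst-launch sketch below. This is only a rough illustration of the idea, not our actual configuration: the real setup used Flumotion components rather than a bare pipeline, and the source element and port number are assumptions.

# Illustrative only: read DV from FireWire, encode to WebM, serve over TCP.
# A production setup needs more care than this (e.g. a streamable muxer
# configuration); ours used Flumotion instead.
gst-launch-0.10 webmmux name=mux ! tcpserversink port=8800 \
    dv1394src ! dvdemux name=demux \
    demux. ! queue ! dvdec ! ffmpegcolorspace ! deinterlace ! vp8enc threads=2 ! queue ! mux.video_0 \
    demux. ! queue ! audioconvert ! audiorate ! vorbisenc ! queue ! mux.audio_0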
In addition, apparently in the year since FOSDEM 2012, libav and gstreamer have changed significantly enough that the scripts I used then had to be revisited before they could transcode the existing recordings into WebM ready for video.fosdem.org. Oh well, I had to add in normalisation anyway; it just would've been nice not to have had to revisit them from scratch. Meh.
At any rate, our streams seem to have been very popular. Some statistics:
- Graph: Saturday viewers
- Graph: Sunday viewers
- Graph: served traffic by country (over the whole weekend)
- Graph: served sessions by country (over the whole weekend)
As you can see from these graphs, quite a few people watched the streams from Belgium; I suspect many of those were people at the conference watching a stream from a different room. Additionally, we peaked at 212 viewers, slightly more than the peak of 206 that we had last year. The final conclusion we can take home from these statistics is that the most popular talk on Saturday (at least according to people watching the streams) was the eudev one in the cross-distro room (with 68 viewers); on Sunday, the talks in Janson were all about equally popular, with the peak of 102 viewers reached around 15:45.
Overall, I'm quite happy with the result. Yesterday, Thomas Eugène of NamurLUG (who did the video work for several years before I took over last year) helped me do the first review. What's left now is the transcoding and uploading; this will happen over the next few days and weeks.
Transcoding videos
Since some people asked for it, and since it's (unfortunately) fairly nontrivial, here is the script I'm currently using to transcode a .dv file into WebM:
#!/bin/bash

set -e

# Derive all filenames from the input .dv file passed as $1.
newfile=$(basename $1 .dv).webm
wavfile=$(basename $1 .dv).wav
normalfile=$(basename $1 .dv)-normal.wav
normalfile=$(readlink -f $normalfile)
oldfile=$(readlink -f $1)

# Extract the audio track to a wav file.
echo "Audio split"
gst-launch-0.10 uridecodebin uri=file://$oldfile ! progressreport ! audioconvert ! audiorate ! wavenc ! filesink location=$wavfile

# Normalise the audio.
echo "Audio normalize"
sox --norm $wavfile $normalfile

# First video pass: analysis only, the muxed output is discarded.
echo "Pass 1"
gst-launch-0.10 webmmux name=mux ! fakesink \
    uridecodebin uri=file://$oldfile name=demux \
    demux. ! ffmpegcolorspace ! deinterlace ! vp8enc multipass-cache-file=/tmp/vp8-multipass multipass-mode=1 threads=2 ! queue ! mux.video_0 \
    demux. ! progressreport ! audioconvert ! audiorate ! vorbisenc ! queue ! mux.audio_0

# Second video pass: encode for real, muxing in the normalised audio.
echo "Pass 2"
gst-launch-0.10 webmmux name=mux ! filesink location=$newfile \
    uridecodebin uri=file://$oldfile name=video \
    uridecodebin uri=file://$normalfile name=audio \
    video. ! ffmpegcolorspace ! deinterlace ! vp8enc multipass-cache-file=/tmp/vp8-multipass multipass-mode=2 threads=2 ! queue ! mux.video_0 \
    audio. ! progressreport ! audioconvert ! audiorate ! vorbisenc ! queue ! mux.audio_0

# Clean up the intermediate audio files.
rm $wavfile $normalfile
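Assuming you save it as, say, dv2webm.sh (the name is made up), usage is simply:

./dv2webm.sh mytalk.dv

which drops mytalk.webm in the current directory. Note that the script doesn't quote its variables, so it will trip over filenames containing spaces.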
It's fairly far from optimal, because many of these command-line A/V tools are either badly documented, or have a horrific interface, or both.
avconv is supposed to be able to do audio normalisation, but I haven't been able to figure out exactly how it's done. The options that are specified in the manpage seemingly have no effect.
gstreamer has a 'ReplayGain' plugin which can do audio normalisation. Audio normalisation requires two passes; in theory, I could just add the analysis element to the first-pass gstreamer pipeline, add a '-t' to the gst-launch-0.10 invocation, and parse out the required gain value, which I could then feed to the second-pass pipeline. However, the elements I could pass that gain value to either have a different range for the gain (rgvolume's fallback-gain parameter) or expect a completely different unit with no obvious way to translate between the two (the 'volume' element's 'volume' parameter expects a multiplier, while the replaygain plugin produces a dB value; a simple conversion seems wrong, as it produces a value way out of range). The 'volume' element reportedly has an interface that takes a dB value, except you can't reach it from gst-launch-0.10. Bummer.
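For the record, the first half of that rejected approach would have looked something like the sketch below. This is untested, and the exact format of the tag output that -t produces may well differ between versions:

# Run the ReplayGain analysis element over the audio and scrape the computed
# gain out of the tag messages that -t makes gst-launch print.
gain=$(gst-launch-0.10 -t uridecodebin uri=file://$oldfile ! audioconvert ! rganalysis ! fakesink 2>&1 \
    | sed -n 's/.*track[- ]gain[:=] *\(-\{0,1\}[0-9.]*\).*/\1/p' | tail -n 1)
# $gain would now hold a dB value -- which is exactly where the unit mismatch
# described above gets in the way.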
So instead we split out the audio, do normalisation in sox, and mux that back in during the second pass of the video transcoding. Sox is good. Sox is easy. If all A/V command-line tools were like sox, I would be smiling now.
What's missing from the above script is how to throw away the start or end of a file in case there are several minutes of uninterestingness there. This is easiest (in my experience) with avconv's -t and -ss options. I suspect gstreamer is capable of doing that too, but I haven't figured out how, and this works.
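For example (filenames and timings made up), to throw away the first three minutes and keep the next hour without re-encoding:

# -ss before -i seeks in the input, -t limits the output duration,
# and -c copy copies the DV data instead of re-encoding it.
avconv -ss 00:03:00 -i talk.dv -t 01:00:00 -c copy talk-trimmed.dv

The trimmed file can then go through the transcoding script above as usual.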
You may want to play with some of the options of the vp8enc element. For instance, the threads= value can be increased (or decreased) depending on how many cores your transcoding machine has. You could probably also add a "target-bitrate" value if you find the quality is too low or the system uses too much diskspace. VP8 has several more options; the documentation lists them all.
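As an illustration (the numbers are picked out of thin air), the encoder part of the second pass might then become:

... ! vp8enc multipass-cache-file=/tmp/vp8-multipass multipass-mode=2 threads=4 target-bitrate=1500000 ! ...

The bitrate is in bits per second, so that asks for roughly 1.5 Mbit/s. Depending on your gstreamer version, the property may be called "bitrate" rather than "target-bitrate", so check gst-inspect-0.10 vp8enc first.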
If you've done video recordings for a devroom at FOSDEM outside of the FOSDEM video team, note that FOSDEM is more than willing to host the videos. The easiest way is to put them online somewhere (this does not have to be a public system) that we can wget or scp them from; once you've done that, contact me with the location of the files and we'll put them on the FOSDEM video server.