MA project: Video content and production

As the sixth week of production comes to an end, it’s time I stepped back from coding to concentrate on the video’s content.

Throughout the research stages of the project, I have wanted to test the proportionality of the Investigatory Powers Act – do its benefits justify an increased state of surveillance on members of the public?

I’m aware that asking such a big question could warrant all sorts of responses, and so I contacted a huge range of qualified individuals to comment. While a lot of them are unavailable, I have managed to arrange interviews for this coming week with two people I hope will give contrasting ideas. The first is a former director of GCHQ, and the second is a member of a privacy rights group.

Preparing questions for both interviews has been awkward – I’ve known for months the kinds of things I’d like to ask somebody from each side. Unfortunately, I don’t have a day in each camp, but instead half an hour. I’ve cut my list of a hundred questions down to just seven, with the expectation of asking a few follow-ups – and that’s for each of my interviews.

All I need now are a couple more people to say the magic ‘yes’, and my code to work.

Progress in this area has been slow – primarily because of running through shooting scripts, question lists, risk assessments, and adding a healthy contribution to my critical evaluation.

In my last blog post, I ended with a basic show/hide function that hides videos embedded underneath. I’ve swapped things around since, making the embedded videos hide by default, only to appear when the button is clicked. The thinking behind this is that it doesn’t overpower a viewer on page load, and doesn’t tempt them to skip through content (example: version one).
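The hide-by-default behaviour can be sketched roughly as below. This is a minimal, hypothetical version – the element IDs (`reveal-btn`, `embed-wrap`) are placeholders rather than the project’s actual markup:

```javascript
// Decide the next display state when the button is clicked.
function nextDisplay(current) {
  return current === 'none' ? 'block' : 'none';
}

// Wire up the DOM only when one exists (i.e. in a browser).
if (typeof document !== 'undefined') {
  const wrap = document.getElementById('embed-wrap'); // placeholder ID
  wrap.style.display = 'none'; // embedded video hidden on page load

  document.getElementById('reveal-btn').addEventListener('click', () => {
    wrap.style.display = nextDisplay(wrap.style.display);
  });
}
```

Keeping the toggle logic in its own small function makes the hidden-by-default state explicit, rather than relying on whatever the stylesheet happens to set.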

But on that second point, I have also experimented with a skip button. I’ve written three into the project (example: version two) that allow the user to skip to 60/120/180 seconds of video play. The idea behind this was to then translate the skip into a ‘skip and show’ function, where, when the video reaches a certain time, it triggers the appearance of the embedded video underneath, rather than needing the button. This extra step is still a work in progress, but a forked code on CodePen shows my first attempt at it.
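A rough sketch of how the skip buttons and the ‘skip and show’ trigger could fit together, assuming an HTML5 `<video>` element – the IDs and the 60-second threshold here are illustrative, not the project’s real values:

```javascript
const SKIP_POINTS = [60, 120, 180]; // seconds the three buttons jump to
const REVEAL_AT = 60;               // playback time that reveals the embed

// Pure check: has playback passed the reveal threshold?
function shouldReveal(currentTime, threshold) {
  return currentTime >= threshold;
}

if (typeof document !== 'undefined') {
  const video = document.getElementById('main-video'); // placeholder ID

  // One skip button per time point, e.g. ids "skip-60", "skip-120", "skip-180".
  SKIP_POINTS.forEach((t) => {
    document.getElementById('skip-' + t).addEventListener('click', () => {
      video.currentTime = t; // jump straight to that point
    });
  });

  // 'Skip and show': reveal the embed once playback passes the threshold,
  // whether the viewer skipped there or simply watched up to it.
  video.addEventListener('timeupdate', () => {
    if (shouldReveal(video.currentTime, REVEAL_AT)) {
      document.getElementById('embed-wrap').style.display = 'block';
    }
  });
}
```

Listening for `timeupdate` means the reveal fires on natural playback as well as on a skip, which is what makes the button itself optional.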

I also added a section of trial code that shows the embedded-video buttons only after a short delay following page load.
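That trial could look something like this – the class name and the five-second delay are assumptions for the sketch, not the values used in the project:

```javascript
const BUTTON_DELAY_MS = 5000; // illustrative delay before buttons appear

// How long is left before the buttons should be revealed?
function msUntilReveal(elapsedMs, delayMs) {
  return Math.max(0, delayMs - elapsedMs);
}

if (typeof document !== 'undefined') {
  window.addEventListener('load', () => {
    setTimeout(() => {
      document.querySelectorAll('.embed-btn') // placeholder class name
        .forEach((btn) => { btn.style.visibility = 'visible'; });
    }, BUTTON_DELAY_MS);
  });
}
```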

Looking back at the interviews themselves, I wanted to develop the best way of showing the interview in its full context. I asked the question: “how can I give the viewer the best possible way of viewing the interview?”

The first thing that came to mind was “letting them sit in on it?”

In other productions, I’ve taken two cameras to film both the interviewee and myself. Obviously this requires two cameras, but also two tripods, at least two microphones (although I usually double-mic everything, so four/five), and longer to set up and pack down. Travelling with all of this equipment, while maintaining a professional appearance, is tricky. It’s trickier when you can only get venue access for one, and aren’t able to bring a mate down for the ride.

So ideal for production, not so ideal physically.

I went back to the question, and to the answer I thought of. It took a good few moments before I even thought about virtual reality.

Virtual reality.

360º video is something I’ve experimented with only once before, in still image form. During my research I had come across an interactive 360º video – specifically a shop tour where you could click on products you liked and then go onto buy them, all within the video browser. It was a crazy idea, and worked amazingly well on mobile devices.

I also reflected back on an episode of Click, the BBC’s technology news programme, which was filmed entirely with 360º equipment (watch at the bottom of this blog post). The outcome was extraordinary – I remember spinning around with my phone to get an idea of what was going on behind the scenes, and I remember the framing of their shots, especially those in a makeshift studio. As an audience member, I felt like I was in the room with Spencer Kelly and his guest.

As my project is about widening the viewing experience for the audience, I figured it would be worth exploring a 360º layer. That’s not to say the entire production is going into VR; I am still organising a flat online video as the primary source of content. Instead, I am going to see how viable it is to allow the viewer to see the whole interview in 360º.

What this means is that if you wanted to ‘experience the whole interview’, you would essentially be taking a seat at the table – rather than just watching it.

I’m not entirely sure how it’s going to work as a finished product, but my testing so far has proven relatively successful. The only hesitation I have is that while the test video I produced was easily embeddable in a Thinglink project, it was not so translatable into the video I am coding. If there’s one piece of peace of mind though, it’s that I won’t have to worry about carrying so much equipment. While the primary camera kit will stay the same, the 360º camera is small, and only needs a free-standing selfie stick.

In the meantime, I shall continue to work on my critical evaluation. Hopefully in a week’s time, I’ll have a positive reaction to how the two interviews went, and depending on production, maybe even something to preview!
