John Wilson discusses how arts organisations are opening up their archives, collections and data with studio guests Bill Thompson, Head of Partnership Development, BBC Archives; Drew Hemment, Founder and CEO of Future Everything; and Dr Paul Gerhardt, Director of Archives for Creativity.
Including: what arts organisations need to consider when digitising their archives, as we hear from the bespoke archive Siobhan Davies RePlay alongside the Google Art Project; the V&A and the Public Catalogue Foundation on how involving the public in tagging your digital archive can create new ways of experiencing a collection; how organisations can use their data innovatively and creatively, opening it up to the public and even developers; and the British Museum on working alongside other cultural institutions to revolutionise the way we search collections data through the web.
Download or subscribe to the podcast series on the Arts Council England iTunes channel: http://bit.ly/RnDpodseries
This blog post has been written by Colin Nightingale (Senior Producer, Punchdrunk)
I joined Punchdrunk almost ten years ago and during that time I have been heavily involved in the creation, production management and producing of virtually all the company’s major projects. Most recently, I consulted closely with the team building Sleep No More in New York to help realise the project within the given time. Since the show’s opening, I have remained resident in the city for extended periods, advising the executive producers (Emursive) on the maintenance of the show and the development of secondary activity at the McKittrick Hotel (home of Sleep No More) without compromising the integrity of the main production. This placed me in an ideal position to consider the challenges that the NESTA-funded research project has presented as we work together with MIT Media Lab to realise this ambitious project. My involvement in the whole process has been a great opportunity to learn directly from the Media Lab about the capabilities of current technology. Since the early days of Punchdrunk, there has always been a desire to explore the territory between online environments and the tangible worlds we create, and it is hugely exciting to have had the opportunity to delve deeper into this area than ever before and really start to imagine the future.
A major challenge for me personally has been working in isolation over here in New York, whilst the rest of the core Punchdrunk team has been working in London throughout the majority of the development phase. Punchdrunk’s creative process is naturally organic and constantly evolving but this has been especially true in this instance, given the unknowns that you inevitably face whilst working on a research project of this nature. However, the wonders of email and Skype (however imperfect it is at times) have been invaluable and have allowed us to stay well connected in between the short trips that the Punchdrunk and Media Lab teams have been able to make to New York.
Over the last two months, as the shape of the project has finally settled down, we have been working hard to solve the challenge of integrating all the new ideas into the larger Sleep No More production. One major challenge, and something that I have personally found fascinating, has been considering how we create an experience for the live participants in the ‘show world’ that doesn’t disrupt or prove detrimental to the wider Sleep No More audience. This has involved careful analysis of the character performance loops in order to identify spare windows of time in specific locations in the building that we can exploit for this project. Thanks to the incredibly detailed information carefully documented by Carrie Boyd and the Stage Management team, this has been a relatively painless process. The daunting part has been discovering how intricately we had originally filled the building with activity. As a result, the windows of opportunity have proved smaller and fewer than we first envisaged, and we will be working hard in rehearsals to make sure that everything is possible in the pockets of time that do exist.
A Special Ops design team has been formed to work on some new design ideas that will be installed in the space especially for the NESTA project. Under the guidance of the original designers, Livi Vaughan and Beatrice Minns, new props have been sourced and made, all in keeping with the overall aesthetic of the McKittrick Hotel.
We have also been making steady progress installing the new technical infrastructure that the project requires. This will make possible all the incredible ideas that MIT Media Lab have developed to allow the online and real-world participants to explore their separate environments while connected, at times able to interact with and guide each other. Over 8,000 ft of new cable runs have been installed around the building, and this weekend Ben Bloomberg from the Media Lab has been in New York making final adjustments to ensure that all the cabling is working and that the newly installed internet connection is fully operational, ready for the start of some intense testing over the coming weeks.
All of this activity has had to be scheduled around the general day-to-day maintenance and rehearsals required to keep Sleep No More running eight times a week, along with the many one-off events now being hosted in the 100,000 sq ft of the McKittrick Hotel. There are around 150 people working in the building on a weekly basis, and we greatly appreciate everyone’s help in accommodating this project: from the producers (Emursive), with their willingness and flexibility, to Wayne and his amazing maintenance team, who have even kindly given up part of their workshop to become the control centre for the project. Excitement and intrigue are growing, and everyone in the building is very much looking forward to beginning the testing phase and finally seeing all the abstract ideas we have been trying to describe become a reality.
It has been relatively quiet, project-wise, for the past two weeks. Tom has left IWM, and the new project leader, Carolyn (Head of New Media at IWM), and project partner Claire have been at Museums and the Web in America. The back-end boys, KI and Gooii, have been coding back and forth in the north. At the museum, Wendy (Digital Projects Manager at IWM) and I have been picking up snagging issues on the SI kiosks in the A Family in Wartime exhibition.
Those first 6 social interpretation kiosks and QR codes have been live for a couple of weeks now. Hardware and software issues with the ASUS tablets have meant snagging has been a little out of proportion to the small number of kiosks installed. Some of this is because we built the interface from scratch and worked right up to the day of installation. And some of the issues are with the kiosk housing design: pressure on the touch screens is confusing them, meaning resets are needed too often.
Comments are coming through, though: lots of spam, lots of bad spelling and some genuine social commentary. Nothing properly offensive has been reported yet.
I keep popping down to watch people using the kiosks. Love a bit of unofficial visitor evaluation, I do. But it’ll be good to see what Claire comes up with when she gets back to the business of properly evaluating things.
Overall, it appears visitors are confused with our dual voice interface design, as well they might be. Combining a museum voice (digital label) and visitor voice screen (comments) was only ever going to be a compromise. It was a bit of a triumph of internal stakeholders over true visitor experience. The visitors aren’t fooled. And hopefully we will be able to address that imbalance in the 4 kiosks we’ll install at North soon.
Next up is finding content for the planned roll-out of QR codes. Selection criteria for objects were, quite frankly, a best guess when this project was planned. That fudge is coming back to haunt us now, as inconsistencies in the collections database make life difficult. But content (always the King) is usually the rub in digital, indeed any, museum project.
So. The Social Interpretation. She goes live today. The first bit anyway. 6 kiosks for visitors to comment on 6 objects. And 8 QR Codes that resolve to shiny new IWM mobile web pages, for the associated objects. It has taken an inordinate amount of time and effort to get this far. But then exhibition things are never straightforward, seamless, unproblematic or, even, easy.
The A Family in Wartime exhibition, housing this phase of the project, almost overwhelms our beloved social interpretation. Objects, paintings, films and blown-up photographs are your first overriding impression. But, really, that is how an exhibition should be.
Of course, children only have eyes for technology, so they spot the kiosks straight away. A young chap (in the image above) wrote, about an evacuee label: “If Daddy sends me away I’ll call Childline.” His Dad actually marched him around the private view of the exhibition and made him comment on each object. He went and commented willingly enough. We should have got his name and given him a job on the project.
There is not much evidence of people engaging with the QR Codes yet though. They are as small (or rather, as large) as we were allowed to make them. Maybe not large enough. Or maybe people just don’t know what they are. We’ll see.
And although a slight pause from #socialinterp might be nice now, we need to crack on with the presence at IWMN, the mobile app, further rollout of codes and our web presence. I could muster up a comment on the lack of time for a project nap. But it might not be very polite.
The following post was written by Simone Ovsey from the MIT Media Lab. Simone is the project manager for the Media Lab and is working closely with all of the team in the Opera of the Future group who are collaborating on the project. We thought it was vital to hear about the project from our digital partners, as up to now only Punchdrunk have spoken about the project on this forum.
For our next post we’re hoping to get a blog from one of the team working on the project on site in NYC.
Media Lab Update
Working with Punchdrunk to realize a new vision for audience interaction and participation is proving to be a most worthwhile and rich experience for our team, led by composer Tod Machover, at the MIT Media Lab. We are excited – and it has been great fun – to be partners in pioneering a new type of live performance that highly personalizes the experience for onsite and online participants and explores original ways of fostering meaningful relationships between these audience members through real-time interaction. Entering into uncharted territory within the world of Web technologies, wireless communication, and multimedia has certainly proven to be a fascinating and ambitious venture.
On the Media Lab end, we are at work integrating technologies that have never before been combined, and developing entirely new ones. Pushing the current capabilities of Web standards and wireless communications technologies, we are creating the infrastructure to deliver personalized multimedia content, sourced in real time from a central location, so that each online participant receives a completely unique experience co-created by his or her own actions as well as those of an onsite audience member, all within the context of the existing Sleep No More experience. Whereas each live visitor to Sleep No More constructs an individual experience from a single multi-stranded presentation, making the online experience equally compelling has meant creating an entirely different show for each online “player”, which is definitely not what we imagined when we started the project! Through custom applications of emerging Internet browser capabilities, video delivery infrastructure, affective sensing, and the Cisco wireless equipment that outfits the McKittrick Hotel, we are able to connect online users with counterparts in the live show, pushing past the boundaries of virtual collaboration through carefully defined, constructed and mediated methods of interaction between two people.
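To give a flavour of the pairing side of this, here is a purely illustrative Python sketch of a matchmaking queue that pairs waiting online players with onsite audience members as free moments open up in the show. All names here are made up for the example; this is an assumption about the general pattern, not the Media Lab’s actual system.

```python
# Illustrative sketch: pairing online "players" with onsite visitors.
# Everything here is hypothetical, not the Media Lab's implementation.
from collections import deque


class Matchmaker:
    """Pair waiting online players with onsite audience members."""

    def __init__(self):
        self.online_waiting = deque()  # online players queued for a session
        self.sessions = []             # (online_player, onsite_visitor) pairs

    def online_joins(self, player_id):
        """An online player logs in and waits to be paired."""
        self.online_waiting.append(player_id)

    def onsite_window_opens(self, visitor_id):
        """Called when an onsite visitor reaches a free window in the show;
        pairs them with the longest-waiting online player, if any."""
        if not self.online_waiting:
            return None
        session = (self.online_waiting.popleft(), visitor_id)
        self.sessions.append(session)
        return session


m = Matchmaker()
m.online_joins("player-1")
m.online_joins("player-2")
print(m.onsite_window_opens("visitor-A"))  # ('player-1', 'visitor-A')
```

The first-come-first-served queue is just one possible policy; the real system presumably has to account for where each visitor is in the building and which performance loops are active.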
The most surprising aspect of the project so far is how much of an intriguing and creative challenge this work has posed for us on the conceptual level. We’ve discovered that many of the technologies we require to deliver an immersive, multisensory experience across distance simply do not exist, so we have the exciting opportunity to create them. We are re-imagining the current application of protocols in the realms of video streaming, interactive fiction and virtual collaboration, while building new infrastructures to house our innovative developments. Our team of software and hardware engineers, sound designers, interface specialists, game narrative specialists and “affective computing” experts has been involved in the entire process of constructing entirely new narratives to add to the current Sleep No More show, ones that can enhance and “explode” meaning for both the onsite and online participants.
Since the start of our collaboration with Punchdrunk at the end of 2011, we have maintained a successful, working relationship with the visionary immersive theater company. Conquering the transatlantic divide, we have found an optimal balance of idea sharing and iteration to ultimately realize our project goals. After months of successful communication via e-mail and video conferencing, our synergy was exemplified in a recent trip to NYC for an intensive day of brainstorming and problem solving with all of the members of the Media Lab and Punchdrunk teams. Once together, we were able to make decisions about the scope and quality of the experience that marked the transition from ideation and development to the production phase of the project. The fluid exchange of creative ideas between both sides has leveraged our broad range of expertise to the fullest in order to co-develop an experience that is completely original in both the technical and theatrical realms. It has also allowed us to proceed through the less-optimal communication channels of e-mail and Skype with renewed vigor and deeper collaborative understanding.
Author: Simone Ovsey
I was really very interested to read this post arguing that the success of going mobile in museums lies in the hands of Visitor Services departments. The usual museum visitor neither knows nor cares about the machinations and politicking involved in getting a project like Social Interpretation off the ground. They don’t much care about budgets, stakeholders, design sign-offs, advisory committees and mobile phone icon debates. Or any of that stuff. What they care about, and remember, is their experience in the museum, of which mobile is going to become a more and more common part.
For the SI project, we have chatted to, involved and, erm, groomed the IWM Visitor Services department from the start. As the people who will ultimately advocate our products on the museum floor, it feels only right that we listen to what the Visitor Service Assistants (VSAs) have to say and, in return, let them know as much about what we are doing as is possible, practical and useful.
We have some training sessions for IWM’s VSA staff coming up, just prior to the first Social Interpretation roll-out of comment kiosks and QR codes in the A Family in Wartime exhibition from 5 April. The training plan is to let the VSAs loose on the kiosks and codes themselves: to play, comment and ask us questions. We need to tell them as much as possible so that they can tell visitors what they, in turn, can do.
The biggie is how to deal with post-moderated commenting. What if an offensive comment ‘sits’ there for all to see? How do we explain that the power to deal with that comment is in the visitors’ hands? And that the comments are the visitors’ voice, not the museum’s? The VSAs should also be in a position to help any interested but technologically awkward visitors use the kiosks to comment, or show them how to scan a QR code and what content they might get in return. We are printing postcards to tell visitors about the free Wi-Fi and how to scan QR codes. VSAs will hopefully hand them out and help people interact with the strange bar-cody things.
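For the technically curious, this is the essence of a common post-moderation pattern: comments appear immediately, and enough visitor flags hide a comment pending review. The sketch below is a generic illustration of that pattern (the threshold and names are invented), not the actual SI implementation.

```python
# Generic post-moderation sketch: comments go live immediately; visitor
# flags hide a comment once a threshold is reached. Illustrative only --
# an assumption about the pattern, not the actual SI system.

FLAG_THRESHOLD = 3  # hypothetical number of reports before hiding


class Comment:
    def __init__(self, text):
        self.text = text
        self.flags = 0  # number of visitor reports so far

    @property
    def visible(self):
        """A comment stays on screen until enough visitors flag it."""
        return self.flags < FLAG_THRESHOLD

    def flag(self):
        """A visitor reports the comment as offensive."""
        self.flags += 1


c = Comment("some dubious comment")
print(c.visible)   # True -- live as soon as it is posted
for _ in range(3):
    c.flag()
print(c.visible)   # False -- hidden pending moderator review
```

The point of the pattern is exactly what the post describes: the power to deal with an offensive comment sits with the visitors, not with a moderator approving everything up front.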
And finally, VSAs should know how to point inquiring visitors towards the disclaimers and T&Cs on the kiosks, which might answer their questions about what is going on with comments and what could happen to theirs, should they choose to take part.
The VSA staff are the front-line, public face of the Social Interpretation project, and success does indeed lie heavily on them knowing whereof we speak. They’ll also have to field, first hand, questions about what all this social stuff means. So we’ll keep our ear to the ground and check back with the VSAs to see how they find it all. Good or bad. Annoying, liberating, frustrating or just a no-brainer of a great idea, and why didn’t museums do it sooner? Fingers (and phone lines) crossed it’s the latter.
Thanks to Neil Young for that headline! I’ll be humming this when we put the second part of our solution to the reality test tonight at London’s Barbican, where the LSO will be performing the wonderful Brahms Symphony No 2.
Having seen lots of our student target audience download the app, visit the mobile site and purchase tickets successfully (with still more buying via mobile as I write this), this evening will be all about redemption.
Our standalone and custom built mobile ticketing solution consists of the following components:
– the ticket owner’s mobile phone: either a handset loaded with the LSO Pulse iPhone/Android app, or a smartphone with mobile Internet access
– the mobile ticket, which consists of a number of visible and hidden data fields, and a corresponding QR Code, stored on each mobile device
– a handheld barcode scanner, connected via cable to the notebook below
– a notebook computer, stripped to the bare bones to do only one thing: run the Chrome web browser to access the web app below
– the LSO Pulse Ticketing web application, which is browser based and is driven from our KOMOBILITY platform (runs in the “cloud”)
The student coordinator will use this setup to scan all mobile tickets, checking whether each ticket is valid (purchased, not yet redeemed, and for this event) and, upon success, marking the ticket as “collected” in the web application. For phase 1, the coordinator will then hand the student a paper ticket matching the pre-sold seat information.
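As an illustration, the validation step amounts to something like the following sketch. The field names and the in-memory store are made up for the example; the real checks live in the KOMOBILITY web application.

```python
# Minimal sketch of mobile-ticket redemption logic. Field names and the
# in-memory store are illustrative assumptions, not the KOMOBILITY API.

tickets = {
    # ticket id (encoded in the QR code) -> ticket record
    "T-1001": {"event": "LSO-BRAHMS2", "purchased": True, "redeemed": False, "seat": "K12"},
    "T-1002": {"event": "LSO-BRAHMS2", "purchased": True, "redeemed": True,  "seat": "K13"},
}


def redeem(ticket_id, event_id):
    """Validate a scanned ticket and, on success, mark it collected."""
    ticket = tickets.get(ticket_id)
    if ticket is None:
        return "unknown ticket"
    if ticket["event"] != event_id:
        return "wrong event"
    if not ticket["purchased"]:
        return "not purchased"
    if ticket["redeemed"]:
        return "already redeemed"
    ticket["redeemed"] = True  # the coordinator marks it "collected"
    return f"valid - seat {ticket['seat']}"


print(redeem("T-1001", "LSO-BRAHMS2"))  # valid - seat K12
print(redeem("T-1001", "LSO-BRAHMS2"))  # already redeemed (double scan caught)
print(redeem("T-1002", "LSO-BRAHMS2"))  # already redeemed
```

Marking the ticket redeemed on first scan is what stops the same QR code, forwarded by screenshot, being used at the door twice.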
In future events, we plan to drop the last “paper” element of this process; however, this being a trial involving a large concert and organisation (the Barbican), we would rather be safe than sorry!
The web application allows for alternative lookup methods should the scanning of the mobile ticket fail, for example lookup by mobile number, name and PIN code. Obviously, just scanning is much more sexy (and faster), so one of our KPIs tonight will be the percentage of sold tickets successfully scanned at the first attempt.
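The fallback lookups could, in principle, be as simple as a scan over the same ticket records. Again, this is purely illustrative, assuming each record also stores the buyer’s name, mobile number and PIN (none of which is confirmed by the actual schema):

```python
# Illustrative fallback lookup for when a QR scan fails: find a ticket by
# mobile number, or by name plus PIN. Fields are assumed, not the real schema.

records = [
    {"id": "T-1001", "name": "A. Student", "mobile": "07700900001", "pin": "4821"},
    {"id": "T-1002", "name": "B. Student", "mobile": "07700900002", "pin": "9377"},
]


def lookup(mobile=None, name=None, pin=None):
    """Return matching ticket records when the barcode scan fails."""
    matches = []
    for t in records:
        if mobile and t["mobile"] == mobile:
            matches.append(t)
        elif name and pin and t["name"] == name and t["pin"] == pin:
            matches.append(t)
    return matches


print(lookup(mobile="07700900001"))           # finds T-1001
print(lookup(name="B. Student", pin="9377"))  # finds T-1002
```

Requiring name plus PIN together (rather than name alone) keeps the manual path from handing a ticket to the wrong student with a common name.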
We have a number of fallbacks in place, including iPads with 3G connectivity should the Wi-Fi fail, several mobile handsets (iPhone and Android) to deal with usability queries and, last but not least, a paper list of all tickets, students and seats allocated. If all goes well, that list will not be touched.
As for mobile ticket sales to date, here are some stats on what we have seen so far, and therefore what we expect to see with the students tonight:
- Biggest single transaction was for 8 tickets
- 70% of purchases made by App
- 30% by Mobile Site
- iOS 70%, Android 30% of app downloads
- Average tickets per user = 2.5
- App users have bought 2.7 tickets on average, mobile site users 2
- So far, 7.7% of ticket buyers have shared that fact via Facebook
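As a quick sanity check, the per-channel averages are consistent with the overall figure: weighting the app and mobile site averages by the 70/30 purchase split gives roughly 2.5 tickets per user.

```python
# Sanity check on the reported stats: the overall average tickets per user
# should equal the app/mobile-site averages weighted by the 70/30 split.

app_share, site_share = 0.70, 0.30
app_avg, site_avg = 2.7, 2.0

overall = app_share * app_avg + site_share * site_avg
print(round(overall, 2))  # 2.49, i.e. ~2.5 as reported
```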
We hope we have almost everything worked out, and we are excited to see the original concept complete its first full circle. We’ll document it all and, whether good, bad or ugly, will post a detailed update here in a few days.