I had a very interesting meeting with some folks from Microsoft and members of Fuqua’s Health Sector Management program last week. We were exploring how emerging technologies might help with disseminating best practices in clinics in Africa, among other concepts related to improving healthcare on a global scale.

As we were talking about the power of Web 2.0 technologies to let stakeholders in different countries connect asynchronously through the participatory web…I was reminded of Andrew McAfee’s great article in the MIT Sloan Management Review, where he developed the SLATES components of Enterprise 2.0 technologies: Search, Links, Tags, Extensions, Signals.

    Navigation and interfaces are moving into the background as SEARCH dominates

    LINKS inform search and empower the prosumers who power the participatory web

    TAGS democratize the categorization of content, making the patterns and processes of knowledge work more visible

    EXTENSIONS of tagging allow services like StumbleUpon to optimize around your search preferences the way Pandora does for music

    SIGNALS of new input are fed via RSS.

Our discussion, I thought, was moving in a predictable direction, one where we would begin to examine how to leverage what Michael Wesch has described as the “User Generated” revolution:

    User Generated Commentary (Blogs)

    User Generated Content (YouTube)

    User Generated Filtering (Digg)

    User Generated Organization (Delicious)

    User Generated Distribution (RSS Feeds)

But then, being that we are human, the discussion began to shift. We talked about Microsoft’s Virtual Earth as a 3D scaffold upon which all kinds of digital content can be hung. Think of it as a navigation interface we are already familiar with (the globe) and then think about being able to hang content on this mirror world scaffolding with one click.

Immediately my mind jumped to an awesome TED talk I saw on Microsoft’s Photosynth research project that is now available here.

If you have not seen the Photosynth video yet you should check it out here (Go to 4:32 for demo):

Now imagine that all cameras are location aware, and that you were to map all the pictures on Flickr of locations like the Pyramids, the White House, or Notre Dame onto the Virtual Earth scaffold. You would essentially have a 3D User Generated Context (UG3DCx) in which users (sans avatar representation) can navigate a 3D rendering of the location.

OK, I get the “UG3DCx” part, you say…but what is with the “(t)”? Well, those of you who read my blog know I have a keen interest in time travel. So, if the photos can be tagged in three dimensions and they also carry a standard timestamp, we can not only navigate a spatial representation of a given location, we can also do so temporally…and that is very cool. Imagine being able to walk through the rebuilding of the Twin Towers 100 years from now, or, more personally, navigating through your ancestors’ homes and cities.
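To make the idea concrete: treat each photo as a record carrying a location tag and a timestamp, and the space-plus-time navigation above becomes a simple filter. Here is a minimal sketch, assuming hypothetical fields and a made-up `photos_at` helper (this is not any real Photosynth or Flickr API):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Photo:
    """A user-contributed photo with assumed location and time tags."""
    url: str
    lat: float        # latitude of the camera position
    lon: float        # longitude of the camera position
    taken: datetime   # standard timestamp from the camera

def photos_at(photos, lat, lon, radius_deg, start, end):
    """Return photos near a point AND within a time window,
    so a viewer could step through a place's past as well as its space."""
    return [
        p for p in photos
        if abs(p.lat - lat) <= radius_deg
        and abs(p.lon - lon) <= radius_deg
        and start <= p.taken <= end
    ]
```

Sliding the `start`/`end` window while holding the location fixed is the “(t)” part: the same pile of user-generated photos yields a different 3D context for each era.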

It also strikes me that there could be a very cool application here for Peter Gabriel’s effort to enable user generated “watchdogging”: using cell phone cameras to capture human rights injustices in Africa.

User Generated 3D Contexts that allow us to navigate Space and Time. Bring it on I say ; )

Now, just to push it one more notch – or, more accurately, to look at location-based contextual exploration from another perspective – imagine really (not virtually, as above) being in a physical space and being able to use a device like an iPhone to get contextually relevant data about where you are in real time. Sound impossible? Like most things these days, it already exists…see for yourself:

This could be a great new tool for enabling immersive, location-based Alternate Reality Games.
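Under the hood, the real-time lookup imagined above boils down to ranking nearby content by distance from the device’s GPS fix. As a rough sketch (the `nearby` helper and its point-of-interest list are illustrative, not any real phone API), using the standard haversine great-circle formula:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearby(points, lat, lon, limit=5):
    """Rank (name, lat, lon) points of interest by distance from
    the device's current position, nearest first."""
    return sorted(points,
                  key=lambda p: haversine_km(lat, lon, p[1], p[2]))[:limit]
```

Swap the hypothetical point-of-interest tuples for game objectives and you have the skeleton of a location-based Alternate Reality Game: the device’s position, not a menu, decides what content the player sees next.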