Skyview Media

Technology Never Sleeps: The Future of Big Data and Google Glass

Mountain View, CA -- (SBWIRE) -- 09/04/2013 -- Every minute, a vast amount of content is uploaded and shared on the Internet, and it can be hard to keep up with the flow of information. The next time you run a Google search, consider that it is just one of roughly 2 million searches Google receives in that minute. In the same minute, Facebook users post 684,478 pieces of content, and online shoppers spend an average of $272,070. That adds up to over $391 million every day, quite the chunk of change. As technology advances, new tools will be developed to help people find information on the web, but how will technology change the way content is developed and promoted?
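
Assuming the per-minute spending figure holds around the clock, the daily total follows directly from the arithmetic:

$272,070 per minute × 60 minutes × 24 hours ≈ $391.8 million per day.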

Google, among other search engines, is working to surface content for users based on their search habits and patterns. The information gathered from what users browse lets the engine infer what a person wants to see, or what he or she may be looking for, before it is ever searched. This is called anticipatory computing: the search engine discovers subject matter based on its relevance to the individual user. Everything a person types into a search tool teaches it a little more about that individual. Content discovery is central to the emergence of anticipatory computing; it is what makes search engines so popular, and it is how they make money as well.
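
As a rough illustration of the idea, and not how Google actually implements it, the sketch below ranks a few candidate topics by how often their words have already appeared in a user's past queries; the suggest_topics function and the example data are purely hypothetical.

from collections import Counter

def suggest_topics(search_history, candidate_topics, top_n=3):
    # Count every word the user has typed in past queries.
    term_counts = Counter(
        word for query in search_history for word in query.lower().split()
    )
    # Score each candidate topic by how often its words have appeared before.
    scores = {
        topic: sum(term_counts[word] for word in topic.lower().split())
        for topic in candidate_topics
    }
    # Return the best-scoring topics first.
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

history = ["google glass review", "glass apps for travel", "best travel camera"]
topics = ["Google Glass accessories", "travel photography tips", "recipe ideas"]
print(suggest_topics(history, topics))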

The human factor is at the forefront of these new technologies. Developers are trying to create a more natural experience by building human-like properties such as voice, touch, and gesture into the technology. Nuance, the voice recognition developer behind the popular Dragon software series and the speech technology reportedly used in Apple’s Siri, is working with Expect Labs on its unprecedented iPad app, MindMeld. The app can listen to a conversation among up to eight people, analyze the contextual information in the speakers’ interactions, and use it to perform pre-emptive Web searches. Computers may not be mind readers just yet, but developers can expect to stay very busy as they work on ways to get them there!
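
A toy sketch of that kind of context-driven search is shown below; the preemptive_query helper, the tiny stopword list, and the sample conversation are all made up for illustration, and real systems are far more sophisticated. It simply pulls the most frequent meaningful words out of a conversation and turns them into a search URL.

from collections import Counter
from urllib.parse import quote_plus

STOPWORDS = {"the", "a", "an", "to", "of", "and", "in", "is", "it",
             "we", "i", "you", "that", "for", "should", "up"}

def preemptive_query(transcript, num_keywords=3):
    # Break the conversation into lowercase words, dropping punctuation.
    words = [w.strip(".,!?").lower() for line in transcript for w in line.split()]
    # Count the words that carry meaning (i.e., not stopwords).
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    # Build a search URL from the most frequent terms.
    keywords = [word for word, _ in counts.most_common(num_keywords)]
    return "https://www.google.com/search?q=" + quote_plus(" ".join(keywords))

conversation = [
    "We should plan the trip to Yosemite next month",
    "Yosemite campsites book up fast, check the campsites tonight",
]
print(preemptive_query(conversation))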

Big data, and metadata in particular, is vital to making anticipatory computing perform the way it is meant to. Google Glass is designed not only to let users find, create, and share content, but also to “tag” people or places, which helps describe an item and allows it to be found again by browsing or searching. Tags are usually chosen informally and personally by the item's creator or by its viewers, depending on the system. This builds up sets of metadata that will eventually shape the digital world, and the line between the real and the virtual will keep blurring until the two feel like one big all-in-one virtual system.
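
To make the tagging idea concrete, here is a minimal, hypothetical sketch of a folksonomy-style index, not Glass's actual implementation: items are tagged informally by a creator or viewer and can later be found again by tag.

from collections import defaultdict

class TagIndex:
    # Toy tag store: free-form tags attached to items, searchable by tag.
    def __init__(self):
        self._by_tag = defaultdict(set)

    def tag(self, item, *tags):
        # Attach one or more informal tags to an item.
        for t in tags:
            self._by_tag[t.lower()].add(item)

    def find(self, tag):
        # Return every item that was tagged with the given label.
        return sorted(self._by_tag[tag.lower()])

index = TagIndex()
index.tag("photo_123.jpg", "Golden Gate Bridge", "sunset", "San Francisco")
index.tag("video_456.mp4", "San Francisco", "cable car")
print(index.find("san francisco"))   # ['photo_123.jpg', 'video_456.mp4']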

Right now, Google Glass is just a pair of weird-looking specs that are mostly used to take blurry photos and Instagram selfies, and that's okay! The real promise is what comes next. Developers have been wrapping their brains around Google Glass and have begun to see what they can build with it. It is a technology that serves as a catalyst for the future.

Google Glass and the “Google Now” feature incorporate anticipatory computing, drawing on the user information collected as different things are viewed, searched, shared, and posted. Glass adds what some are calling a fourth dimension with its augmented reality (AR) display, giving content an all-new meaning. Glass will also play a big role in understanding how users interact with AR, which could lead to better AR experiences as the content and technology grow and evolve.

The possibilities with Glass are limited only by the imagination, and depending on how well users take to the AR experience, Glass could soon emerge as the top player when it comes to managing all of this content. Glass users can create and share content quickly, which is exactly what content producers are looking for. A multitude of Glassware apps will let users capture moments and share them with people via social networking. Coupled with Glass's augmented reality display, this emerging user-generated content is going to be like nothing we have seen before. Combined with an app such as Google+ Hangouts, Glass can offer a hands-free video calling tool that makes it easy to have a mobile conversation with other people.

In the end, content needs users just as users need content. For content to stay relevant today and into the future, developments in content creation must be evaluated, and things like anticipatory computing, augmented reality, and the integration of apps need to be addressed. In the future, content creators will need to know their audience even better, since users will automatically be served content based on their personal choices. The future of the search world depends on it.

Media Contact
www.skyviewmedia.com