The First Augmented Reality Experience Ever

--

We all have AR glasses on.

Each one of us is lugging around a really fancy reality filter that is collecting a baffling amount of data, reorganising and enhancing each moment of our existence.

First things first — this post isn’t really about Augmented Reality, so it’s okay if AR is just a fancy buzzword you picked up somewhere.

Long before the first AR experience, or even the idea of one, came into existence, linguists and cognitive scientists were working on understanding the human brain, which, according to research, has been augmenting our reality all along, creating its own constructs, colours, tones and entities.

Just to give you a sense of what it means to have this “reality filter”, let us see the world without it for a moment.

It fits meaning to context.

It helps us make sense of sentences like —

“I would like to be the air that inhabits you for a moment only. I would like to be that unnoticed and that necessary.”

One is tempted to think it just helps us understand language. The reality is much more deep-rooted. It helps us create symbols from complex entities and manipulate them effortlessly. Look at these lines from the show Modern Family —

The key is I let Claire think she’s in charge. I hide what I want in something bigger and more expensive, then when she rejects that, we ‘compromise’ on what I wanted all along. I call my method ‘The Trojan Horse.’

Just like that, a symbol is born; a symbol that can be used as a verb (e.g. stop Trojan horsing me around).

From an AI point of view, this system falls under the problem of adding common sense to machines. Machines know shockingly little about our world. Take this sentence, for example —

“Mary bent down to look at the dead man’s face, but there was a thick cloth over it.”

To understand the sentence above, the machine needs to know (a rough code sketch of these facts follows this list) —

  1. Thick cloths are opaque.
  2. Humans cannot see through opaque objects.
  3. When humans die, they can be referred to as ‘it’.
  4. For the cloth to be over the dead man, it needs to be above the dead man and below Mary.
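
To make this concrete, here is a minimal, purely illustrative sketch in Python (not from any real library, and not part of the original post) of what it means to hand a machine such facts explicitly. Every relation name below is an assumption invented for the example; the point is only that each fact a human takes for granted has to be written down somewhere.

```python
# A toy, hand-written "world model" for the Mary sentence.
# Every fact and relation name here is invented for illustration;
# humans carry this knowledge around implicitly, for free.
facts = {
    ("thick cloth", "is_opaque"): True,           # fact 1
    ("thick cloth", "covers", "face"): True,      # "a thick cloth over it"
    ("it", "refers_to"): "the dead man's face",   # pronoun resolution
}

def can_see(observer, target):
    """Naive rule for fact 2: you cannot see something covered by an opaque object."""
    covered = facts.get(("thick cloth", "covers", "face"), False)
    opaque = facts.get(("thick cloth", "is_opaque"), False)
    return not (covered and opaque)

print(facts[("it", "refers_to")])                 # the dead man's face
print(can_see("Mary", "the dead man's face"))     # False
```

Writing such facts and rules out by hand clearly does not scale, which is precisely why common sense remains such a hard problem for machines.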

We parse the sentence above so effortlessly because, right from birth, every second of the day, we have been collecting data about our world, noting down properties of the things around us so that we can later return to them from different perspectives.

That brings us to an interesting question. Given all the data about the world, can machines really understand us?

Our reality is centred on us. We look at things from where we stand and attach meaning to them accordingly. Our mind is so tightly coupled to our physical environment, and to our configuration in it, that we fail to realise how much of our thoughts and words would change if that configuration were different. Language, like everything else, has evolved with us, gradually adapting to our usage.

So, would simply transferring all of this world knowledge to a machine make it understand us?

Probably not. A machine would understand language a little differently, and that in itself illustrates how rich a construct language really is.

How is any of this useful? As it turns out, this is not new. The best presenters understand it, the best salespeople appeal to us on this level, and the best teachers lay out foundations so that new ideas attach to what we already know. Closely related is the idea of “mental models”, which can loosely be interpreted as methods for thinking about things (https://www.farnamstreetblog.com/mental-models/). Again, the premise is our ability to make symbols out of complicated concepts and look at them from different angles, unearthing new knowledge and insight.

This reality filter helps us invent, create art, explore the unknown, resolve ambiguity, draw analogies and so much more. The question is, where do our machines go to get all this?

Sources —

  1. The Semantics of English Prepositions, Andrea Tyler and Vyvyan Evans
  2. Metaphors We Live By, George Lakoff and Mark Johnson
  3. Illustrations inspired by Wait But Why.
