The next big Augmented Reality startup


Here I am, sitting with my good friend Andrew (34) from NYC, who works with datasets, patterns and machine learning in a way that only tech geeks understand. We will introduce the notion of Augmented Reality (AR): a live view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data.

Andrew, what are you doing at the moment?

I look at what happens when two strangers meet, and specifically at the differences between when they are attracted to each other and when they are not. I am trying to come up with ways to predict when that will happen. I work with computer scientists who deal with pattern recognition and social sensing. We collect different kinds of data on situations where people are meeting each other for the first time, and then we use psychology and biology to explain what is going on, with computer science as the toolset for identifying patterns. We use tools from pattern recognition, supported by machine learning, to see the differences in how people move and behave.

What does it have to do with programming?

We extensively use Python, which is a good programming language with a very active open-source community where people share a lot of tools for doing advanced things. One of those is computer vision – writing code that a computer uses to make sense of video and image data. You can get these packages, and Python itself, for free, and there is a very exciting community of people contributing and building on top of what is already out there.
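To give a flavour of what "making sense of video data" means at its simplest, here is a minimal sketch in plain Python (no external packages; the function name and the tiny 3x3 "frames" are illustrative, not from the actual project): comparing two grayscale frames pixel by pixel is the most basic building block of motion detection.

```python
def motion_score(prev, curr, threshold=10):
    """Fraction of pixels whose brightness changed noticeably
    between two grayscale frames (nested lists of 0-255 values)."""
    changed = sum(
        1
        for row_p, row_c in zip(prev, curr)
        for p, c in zip(row_p, row_c)
        if abs(p - c) > threshold
    )
    total = len(prev) * len(prev[0])
    return changed / total

frame_a = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
frame_b = [[0, 0, 0], [0, 255, 255], [0, 0, 0]]
print(motion_score(frame_a, frame_b))  # 2 of 9 pixels changed
```

Real computer-vision libraries do the same kind of arithmetic over millions of pixels per second, which is why the free packages Andrew mentions matter so much.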

How do you do it?

We organize speed-dating events where single people we recruit meet each other for the first time, and we get to observe their behavior. After each interaction we ask them different questions, and we use those ratings to estimate how attracted people are to each other. Finally, we use GoPro Hero cameras and accelerometers to gather data about their movements. When people are attracted to each other, there are things that happen in their body movements that we can look at.
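One simple way to turn raw accelerometer readings into a "body movement" signal is to look at how much the acceleration magnitude varies over a window of samples. The sketch below is a hypothetical illustration of that idea (the function name and sample values are invented, not the team's actual pipeline):

```python
import math

def movement_energy(samples):
    """Variance of acceleration magnitude over a window of (x, y, z)
    samples. Higher values suggest more fidgeting or gesturing."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(mags) / len(mags)
    return sum((m - mean) ** 2 for m in mags) / len(mags)

still = [(0.0, 0.0, 1.0)] * 8                     # sensor at rest: gravity only
moving = [(0.0, 0.0, 1.0), (0.3, 0.1, 1.2)] * 4   # alternating readings

print(movement_energy(still))                     # 0.0
print(movement_energy(moving) > movement_energy(still))  # True
```

Features like this, computed for both people in an interaction, are the kind of input a pattern-recognition model could compare against the attraction ratings.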


A rig holds 16 GoPro cameras designed for Google Jump during the 2015 Google I/O conference on May 28, 2015 in San Francisco, California. (Photo by Justin Sullivan/Getty Images)

Pattern recognition and machine learning

There is already an old, manual coding system. Training a human being to analyse the data takes a lot of time, and that person then has to sit there and look at video data frame by frame. In Skype you have the same visual setup – a rectangle with the person in the middle of it – so we can start to write code that automatically assesses facial expressions. If we get to the point where the code does this as reliably as a person, but faster and without someone sitting in front of it, then a piece of software can do it in real time and give you real-time feedback. The machine learning part comes in when we have written the code to the point where it can improve itself; machine learning and pattern recognition are basically complex algorithms.
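The "improve itself" part can be shown with the smallest possible learning algorithm, a perceptron: each labelled example nudges the weights, so the classifier gets better as more coded data arrives. The feature names below (mouth-corner lift, brow raise) are hypothetical stand-ins for real facial-coding measurements:

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Tiny perceptron: weights are adjusted after every labelled
    example, which is the 'improve itself' part of machine learning."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# toy features: (mouth-corner lift, brow raise) -> 1 = 'smile', 0 = neutral
data = [((0.9, 0.2), 1), ((0.8, 0.4), 1), ((0.1, 0.1), 0), ((0.2, 0.3), 0)]
w, b = train_perceptron(data)
print(1 if w[0] * 0.85 + w[1] * 0.3 + b > 0 else 0)  # → 1 ('smile')
```

A production system would use far richer features and models, but the loop is the same: human-coded frames train the software until it matches the human coder.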

What is happening with Virtual Reality at the moment?

The world is not quite ready for Google Glass; it still creeps some people out and there are a lot of privacy concerns. I am very interested in a headset similar to the Oculus Rift that tracks eye movement. While you are wearing it, it gathers data about exactly where your eyes are going, and we can start learning what actual eye contact means. One day there will be a device that gives you an idea of what is going on with the person you are talking to. The Xbox Kinect is very good at mapping facial features, so before we move pattern recognition into something like Skype, we can start by analysing facial expressions with something like the Kinect.
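As a rough sketch of what "learning what eye contact means" could start from, one could measure how often gaze samples from an eye tracker land on the other person's face. Everything here is an assumption for illustration – the function, the normalized coordinate system, and the face bounding box are invented:

```python
def eye_contact_ratio(gaze_points, face_box):
    """Fraction of gaze samples landing inside the partner's face
    region (a hypothetical box in normalized screen coordinates)."""
    x0, y0, x1, y1 = face_box
    hits = sum(1 for x, y in gaze_points if x0 <= x <= x1 and y0 <= y <= y1)
    return hits / len(gaze_points)

gaze = [(0.5, 0.5), (0.52, 0.48), (0.9, 0.1), (0.51, 0.55)]
print(eye_contact_ratio(gaze, (0.4, 0.4, 0.6, 0.6)))  # 3 of 4 samples -> 0.75
```

A statistic like this, tracked over a conversation, is the kind of raw signal a model could correlate with attraction or attention.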


Pedja testing Oculus Rift at Uprise startup festival

You can change the entire HR industry, from recruitment to selection.

Things happen in your environment, you react to them, and then your brain interprets how you feel about your reaction – which is automatic and, to some extent, beyond your control. A big part of this is your facial expression, since you have more muscles in your face than in the rest of your body, and they are connected to parts of your brain that react automatically. It will be possible to conduct an interview on Skype where a recruiter gets a bunch of data during or after the interview, or interviews could instead be conducted by subordinates while still producing a lot of objective data. All of this taken together can be synthesized into a piece of software that gives you a picture of who this person is. The CV might tell you what they did, whereas this can tell you what they are like.

Virtual reality


Pedja: Using this technology, recruiters can identify the extent to which candidates are interested in the job position and passionate about what they do – possibly even when the candidates themselves don't know it. In the near future, all candidates could be invited for an interview if the process is automated, since you can't always judge by someone's CV or motivational letter.

How difficult is it to develop this kind of software?

We can pull it off with the team I have right now plus a person who is an expert in facial recognition. There are not a lot of computer scientists working with psychologists at the moment.

Pedja: This is an unexplored niche market, and once development starts there is an opportunity to involve psychology students.

What kind of funding are you looking for?

We are open to anything that will allow us to put together video data about human behavior, computer vision and social sensing. You can develop software tools in one area and use them in another; we might, for example, get a grant for studying leadership – when someone makes a speech, what makes everyone in the room stop and pay attention? We are approaching research accelerators that provide an entire infrastructure, as well as private organizations like Google for Entrepreneurs.

Is there anyone doing the same thing?

We know computer scientists working on this right now, but they are not doing the facial stuff yet. I have heard of somebody in Vienna working specifically with the face, but there are probably fewer than five to ten organizations in the world. Right now the focus is on showing that these really broad patterns exist before we start to fine-tune.

Pedja: With affordable virtual reality toolkits and cardboard viewers, everyone can enjoy their own VR world, which allows this kind of software to be used by both businesses and consumers. Future gaming is also going to change: you will be able to create virtual characters that look and feel a lot more like people than they do now. This is the so-called "presence" in virtual reality – while you are wearing the goggles, the more real it feels, the more immersive the experience is. Moreover, internet dating could involve 3D models of people showing what they look like and how they move.

Written by