“When CRM storyboarding met real time data and decided to have a torrid affair!”
As the world grows faster and more electronic around us, the streams of data being generated can be, and are being, used to mould how applications behave with people. It goes by many names: recommendation engines, artificial intelligence, behavioural adaptation, personalisation, etc. But what they all boil down to is fluid adaptation to real-time data. As you learn, you (or your machine) grow more confident about users and usage patterns and modify the behaviour of your product accordingly. Or you begin with a hypothesis, supported by data, and keep learning and refining your product according to the streams of data coming in.
True artificial intelligence can only come from a deep learning engine that looks at everything. By everything I mean every single piece of data generated by every single individual, piece of content, application, server, the web, location, interaction, sound, image and so on. It learns through recognition of patterns. “So, every time the light turns red, this guy stops the car; I must stop at every red light.” Or, “That shape corresponds with a human nose, so it must be a nose.” These are simple observations, but they grow more complex as layers and dimensions of data are added to them. Every piece of data is connected through a complex web of decisions, and it’s these decisions the artificial intelligence takes so that the probability of hitting the desired outcome is high. It’s mostly automatic if you train the engine right. It needs data to learn reliably and act accurately. Data that you can design to provide. Tell it what it has to learn, give it direction and watch it burn. A job site can suggest the skills in a resume that have the highest probability of landing a job. A travel assistant app can reliably predict what you want to order for breakfast after a heavy night out in a city away from home. It’s all here if you have the numbers.
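To make the job-site example concrete, here is a minimal sketch of that kind of learned pattern recognition, nowhere near deep learning at scale but the same idea in miniature. The dataset, features and labels are entirely hypothetical; a small neural network learns which resume attributes correlate with landing a job, then scores whether adding a skill raises that probability.

```python
# A toy sketch, assuming a hypothetical job-site dataset.
# Features (made up for illustration): [years_experience, has_python, has_sql, has_ml]
# Label: 1 = candidate found a job, 0 = did not.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([
    [1, 0, 1, 0],
    [5, 1, 1, 1],
    [2, 1, 0, 0],
    [7, 1, 1, 0],
    [0, 0, 0, 0],
    [4, 0, 1, 1],
])
y = np.array([0, 1, 0, 1, 0, 1])

# A small neural network learns the pattern from the examples.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

# Score a new resume with and without the 'has_ml' skill added.
resume = np.array([[3, 1, 1, 0]])
with_ml = resume.copy()
with_ml[0, 3] = 1
print("P(job) without ML skill:", model.predict_proba(resume)[0, 1])
print("P(job) with ML skill:   ", model.predict_proba(with_ml)[0, 1])
```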
If you cannot generate mountains of data, you can still personalise, recommend and adapt to user behaviour. This type of adaptation is called real-time recommendation. It takes real-time data generated by multiple entities related to each other, be it users, content, products and so on, and adapts against set rules and parameters. The rules, or flags, you have to set beforehand. The disadvantage is that if a new piece of data arrives and the engine doesn’t recognise it as a ‘pattern’, it doesn’t adapt to it. Like all UX processes, building a recommendation engine is an iterative experience. You, as human designers, keep recognising, learning and analysing patterns, and growing the engine’s capability to ‘learn’ by adding more dimensions and algorithms to it. The longer a recommendation engine runs, the more attention it is paid, and the more data it gets to ‘feed’ on and form patterns from, the more effective it is. Amazon’s shopping recommendation engine and Netflix’s movie recommender are two examples of recommenders that have been learning, adapting and growing for some time now. These recommenders work on real-time implicit cues like reading, swiping, sharing, buying, adding to cart and so on. The classic recommendation system remains in the background, always invisible, yet always working.
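A minimal sketch of what such a rule-driven, real-time recommender could look like. The events, signal weights and items are hypothetical; the point is that the designer sets the rules up front, the system adapts scores as implicit cues stream in, and anything it was never told about simply passes it by.

```python
# A sketch only: a tiny rule-based real-time recommender.
from collections import defaultdict

# Designer-set rules: how much each implicit cue counts towards interest.
SIGNAL_WEIGHTS = {"view": 1.0, "share": 2.0, "add_to_cart": 3.0, "buy": 5.0}

class RealTimeRecommender:
    def __init__(self):
        # user -> item -> running interest score
        self.scores = defaultdict(lambda: defaultdict(float))

    def ingest(self, user, item, signal):
        # An unknown signal is ignored: the engine cannot adapt
        # to a cue it was never told to recognise.
        weight = SIGNAL_WEIGHTS.get(signal)
        if weight is not None:
            self.scores[user][item] += weight

    def recommend(self, user, top_n=3):
        ranked = sorted(self.scores[user].items(), key=lambda kv: kv[1], reverse=True)
        return [item for item, _ in ranked[:top_n]]

rec = RealTimeRecommender()
for user, item, signal in [
    ("alice", "scarf", "view"), ("alice", "scarf", "add_to_cart"),
    ("alice", "boots", "view"), ("alice", "boots", "buy"),
]:
    rec.ingest(user, item, signal)

print(rec.recommend("alice"))  # ['boots', 'scarf']
```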
The basic drawback of recommendation engines is that the probability of going wrong is very high, because the engine has to be told what to take into account, and you simply cannot ‘tell’ it every variation in the human psyche. So users see products they rejected and abandoned in their carts follow them around on Facebook for days. Or users get recommended news about a star having a fight while reading his obituary. Insensitive as machines can be. 🙂
And then there are manual systems, which adapt to visible user input cues using pre-existing patterns. Quint’s new conversational news product is one such system: it reacts to cues given by the user, like ‘Tell me more’ or ‘Next’, to serve bite-sized pieces of news that conform to the user’s choice in a conversational format. Quint works with journalists and linguists to cut news into smaller, more digestible sections that are direct responses to these human conversational cues. Or take the example of a product suggester that starts showing you bohemian dresses when you say it’s a gift for your ‘dreamy’ girlfriend. Here the cue ‘dreamy’ is mapped to ‘bohemian’ on the trend side, which is mapped to ‘bell sleeves’ and ‘chiffon silhouette’ on the product side. This rests on the hypothesis that ‘dreamy’ people like ‘bohemian’ clothes. Pre-existing patterns that are set up in the background. It’s like a CRM on steroids, constantly initiating customer behaviour and serving options, information and actions accordingly.
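The ‘dreamy’ to ‘bohemian’ mapping can be sketched in a few lines. Everything here is hypothetical, the cue names, the trends, the product attributes; what matters is that the wiring is done by hand in advance, so the system can only respond to cues it already knows.

```python
# A sketch of a manual, pre-set cue mapping system.
CUE_TO_TREND = {"dreamy": "bohemian", "sporty": "athleisure"}
TREND_TO_ATTRIBUTES = {
    "bohemian": ["bell sleeves", "chiffon silhouette"],
    "athleisure": ["jersey fabric", "relaxed fit"],
}

def suggest_attributes(cue):
    # An unmapped cue gets no suggestions: the pattern must pre-exist.
    trend = CUE_TO_TREND.get(cue.lower())
    if trend is None:
        return []
    return TREND_TO_ATTRIBUTES[trend]

print(suggest_attributes("dreamy"))  # ['bell sleeves', 'chiffon silhouette']
```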
This approach does not work long term. The more dimensions of data you have, the more complex this decision-led system becomes. Once it crosses the two-dimension barrier, you eventually have to carry your learnings over to automatic recommendation systems, with the manual part becoming more and more automatic as time goes on. Some hybrid recommendation systems also take your initial cues manually and use them as the foundation for building a recommendation engine. For example, News Republic asks you what you want to follow, and then also recommends news on related subjects. This simply means the engine gets a kick-start to its learning process.
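A minimal sketch of that hybrid kick-start, in the spirit of the News Republic example rather than its actual mechanics. Topic names, weights and the related-topics map are all hypothetical: explicit choices seed the profile, implicit reading behaviour keeps adjusting it, and related subjects the user never asked for get pulled in.

```python
# A sketch of a hybrid recommender: manual seed, automatic refinement.
from collections import defaultdict

RELATED_TOPICS = {
    "cricket": ["football", "athletics"],
    "politics": ["economy", "world"],
}

class HybridNewsRecommender:
    def __init__(self, followed_topics):
        # Manual kick-start: explicitly followed topics get a strong prior.
        self.interest = defaultdict(float, {t: 3.0 for t in followed_topics})

    def record_read(self, topic):
        # Implicit learning: every article read nudges the profile.
        self.interest[topic] += 1.0

    def recommend_topics(self, top_n=3):
        # Blend followed topics with related ones the user never asked for.
        scores = dict(self.interest)
        for topic, weight in self.interest.items():
            for related in RELATED_TOPICS.get(topic, []):
                scores[related] = scores.get(related, 0.0) + 0.5 * weight
        return sorted(scores, key=scores.get, reverse=True)[:top_n]

rec = HybridNewsRecommender(["cricket"])
rec.record_read("politics")
print(rec.recommend_topics())  # ['cricket', 'football', 'athletics']
```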
So which one should you use for your product? Deep learning and adaptation, a recommendation engine, a hybrid, or a manual system that will eventually give you the grounds to build a real-time recommender, and perhaps jump directly to deep learning from there? Do you have a pre-existing product that has generated enough data to form the basis for an automated recommender?
Ideas and potential abound. But let’s take that offline, shall we?
Twitter Profile: https://twitter.com/devingel
Skype ID: ektajafri
Linked In Profile: https://in.linkedin.com/in/devingel