We need to use AI against AI. How?

By | September 19, 2018

Image courtesy: images.google.com

Let’s start with the low-hanging fruit before I get into the domain of using AI for line-of-business applications.

Machine Learning (ML) and Artificial Intelligence (AI) became absolutely necessary because of big data. If it were not for ML and AI, most social media platforms would have been rendered useless from the get-go. Consider a sample of statistics for the most popular platforms: Instagram – 95 million photos per day; Twitter – 500 million tweets per day; Snapchat – 3 billion snaps per day; Facebook – 6 billion likes and comments per day; WhatsApp – 65 billion messages per day.

Any developer worth his or her salt knows that this is a mind-boggling amount of data that cannot be processed with legacy technologies by simply throwing more compute power at it. The key challenge is parsing the unstructured content and making the insights meaningful. Language has a cultural context: the same sentence or word can have a very different connotation depending on the user’s location, background and mix of languages. This is further complicated when multilingual users mix two or more languages; for example, I use Hinglish (Hindi + English) in my social media chats and discussions. Add to this the context of the post or discussion, and any traditional technology will go into an endless loop!
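To make the code-mixing problem concrete, here is a minimal sketch of how a legacy, rule-based approach breaks down. The lexicon and sentences are hypothetical, not from any real system: a naive English-only keyword scorer handles a plain English sentence but sees no signal at all in a Hinglish one, even though "kharab" means "bad".

```python
# A naive English-only keyword lexicon, as a legacy rule-based system might use.
LEXICON = {"great": 1, "good": 1, "bad": -1, "terrible": -1}

def naive_sentiment(text):
    """Sum lexicon scores for whitespace-separated words; 0 means 'no signal'."""
    return sum(LEXICON.get(word.lower(), 0) for word in text.split())

print(naive_sentiment("This movie was terrible"))          # -> -1 (negative)
print(naive_sentiment("Yeh movie bahut kharab thi yaar"))  # -> 0 (no signal),
# even though the Hinglish sentence means roughly "this movie was very bad"
```

A model that has learned from real code-mixed data, rather than a fixed single-language word list, is what the scale and diversity of these platforms demands.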

Just as hackers came into existence in parallel with the growth of the networked world, fakes, deep fakes, hate-mongers, trolls and other entities with malicious intent continue to grow with the growth of social media. And, just as a counter-community of ethical hackers evolved to take on the hackers, we need to develop ethical AI to counter the malicious use and abuse of social media platforms. This problem cannot be solved through legislation or by breaking up the social media companies, nor by throwing more human reviewers or moderators into the mix.

We need to use AI against AI.

I’m sure each of the social media companies, along with many others, is trying to solve this problem or to come up with ethical guardrails for its own platform. This fragmented approach will not work. It will end up creating a multitude of AIs and machine-learning algorithms, each specializing in a specific platform, amounting to applied AI rather than the artificial ‘general’ intelligence that is needed. To build a robust ethical AI, the industry needs to come together and create a consortium of sorts to take on the malicious forces and improve social media credibility.

Whenever I design a solution, a system or even a standalone function (a microservice, in current lingo), I seek inspiration from nature. After all, we have this multiverse running on its own, taking care of its business without any oversight! Luckily, for ethical AI we will not need to reinvent the universe; we can get help from more down-to-earth psychology, statistical analysis and demographic profiling techniques, among others.

Here’s my hypothesis: each computing unit (laptop, server, desktop, cell phone, etc.) has a profile associated with an organization, an individual or a combination of both. Just as we profile human beings using various data points or personality traits, we can create a profile of a computing unit based on what it does or is expected to do. Once a unit is profiled, an ethical AI can be developed that continuously learns, adapts and counters malicious forces at their own game.
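As a rough illustration of the hypothesis, here is a minimal sketch using only simple statistics. The feature names, numbers and the 3-sigma threshold are all hypothetical assumptions for demonstration, not a proposed implementation: a unit’s “profile” is the per-feature mean and spread of its normal behavior, and an observation far outside that profile is flagged.

```python
from statistics import mean, stdev

# Hypothetical behavior samples for one computing unit:
# (posts per hour, fraction of posts with links, distinct recipients per hour)
baseline = [
    (4.0, 0.10, 3.0),
    (5.0, 0.12, 4.0),
    (3.5, 0.08, 2.5),
    (4.5, 0.11, 3.5),
    (4.2, 0.09, 3.2),
]

def build_profile(samples):
    """The unit's profile: per-feature mean and standard deviation."""
    columns = list(zip(*samples))
    return [(mean(col), stdev(col)) for col in columns]

def anomaly_score(profile, observation):
    """Largest per-feature z-score: how far the observation strays from the profile."""
    return max(
        abs(x - mu) / sigma if sigma else 0.0
        for (mu, sigma), x in zip(profile, observation)
    )

profile = build_profile(baseline)
normal = (4.3, 0.10, 3.1)
suspicious = (120.0, 0.95, 400.0)  # sudden spam-like burst

print(anomaly_score(profile, normal) < 3.0)       # within the profile
print(anomaly_score(profile, suspicious) > 3.0)   # flagged: beyond 3 sigma
```

A real ethical AI would of course learn far richer profiles and update them continuously, but the core idea is the same: model expected behavior per unit, then let deviations trigger scrutiny.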

For a deeper discussion on the topic, contact me.

