ARTIFICIAL INTELLIGENCE – ARE WE VICTIMS OR BENEFICIARIES?
The guest speaker, Mr Anand Srinivasan, is the Founder & CEO of DSquare Solutions, a boutique analytics startup based in Bangalore. He is a graduate of IIT Madras with a Master's in Operations Research & Statistics from Purdue University. Over a 20-year career, he has applied "Analytics" and "Data Science" across various domains and geographies.
How do we make sense of the headlines floating around nowadays? Whether we like it or not, we are going to be affected by this technology. Some of the claims are fallacies and some are truths. Medical science has benefited enormously from the new technology: imaging evaluation of scans can now give diagnoses more accurately than doctors can. A Google search shows China using Artificial Intelligence with facial recognition to incriminate jaywalkers. So there are social dimensions to this technology as well.
AI / Machine Learning (ML) broadly impacts two areas:
- Business domain
- Consumer domain
We are impacted by both. If you use a digital assistant like Alexa, you are applying AI and ML. The same is true of navigating with Google Maps, where you are directed to an alternate route to avoid traffic congestion. When you try to talk to someone at a credit card company, you may not know whether you are talking to a human or a robot; quite possibly you are being serviced by a chatbot driven by ML algorithms. However, there are limitations here: if your query goes outside the chatbot's domain, you are lost.
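The domain limitation above can be illustrated with a toy sketch. This is not how any particular credit card company's bot works; the intents, keywords, and responses below are invented for illustration. The point is only that a bot with a fixed set of intents must fall back to a human once a query leaves its domain.

```python
# Toy rule-based chatbot with a fixed domain.
# All intents, keywords and responses are invented for illustration.

INTENTS = {
    "balance": ["balance", "outstanding", "amount due"],
    "lost_card": ["lost", "stolen", "block my card"],
    "due_date": ["due date", "payment date", "when is my bill"],
}

RESPONSES = {
    "balance": "Your current outstanding balance is shown in the app.",
    "lost_card": "Your card has been blocked; a replacement is on its way.",
    "due_date": "Your payment is due on the 15th of every month.",
}

def reply(query: str) -> str:
    """Match the query against known intents; escalate if none match."""
    q = query.lower()
    for intent, keywords in INTENTS.items():
        if any(kw in q for kw in keywords):
            return RESPONSES[intent]
    # Outside the bot's domain -- the caller must reach a human.
    return "Sorry, I cannot help with that. Transferring you to an agent."
```

A query like "I lost my credit card" matches a known intent, while "Can you renegotiate my mortgage?" falls through to the human handover.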
According to a Harvard review, this modern AI toolkit could add about 3.5 to 6 trillion dollars in business value. Hence, large investments are being made in these technologies to reap this huge benefit. Only about 20% of businesses use AI, even though 80% of media coverage is about it. Some businesses are more amenable to analytical solutions than others. The 20% that do use AI predominantly apply it in three broad areas: Sales & Marketing, Supply Chain Management (logistics and inventory), and Manufacturing, where the focus is preventive maintenance to avoid possible failures. Imagine the benefits if AI were applied to the unaddressed fields beyond these three areas. It is only a matter of time before this happens.
What does AI mean in the consumer domain? Here the focus is the human-centric interface through which one uses a computer. There are five areas of importance. The first is voice input, where you don't type but speak to the computer; it understands and responds. Next is the use of natural language, without any specific keywords. The third is voice output. Voice output from the computer is often preferred over displaying answers on the screen, especially when you are driving or cooking. The fourth is intelligent interpretation: the output is related to the person's location as well as their preferences and other personal details, which makes it a complex problem. For example, if you ask "Where can I get a good cup of coffee?", the AI should understand your location and suggest the nearest coffee shop matching your taste.
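The coffee-shop example of intelligent interpretation can be sketched as a minimal ranking that combines the user's location with a stored preference. This is purely illustrative: the shops, coordinates, and styles are invented, and a real assistant would use live map and ratings data rather than a hard-coded list.

```python
from math import hypot

# Invented sample data: (name, x, y, style) for nearby coffee shops.
SHOPS = [
    ("Brew Lab", 1.0, 2.0, "filter coffee"),
    ("Espresso Bar", 0.5, 0.5, "espresso"),
    ("Chai Point", 3.0, 1.0, "filter coffee"),
]

def suggest_coffee(user_x: float, user_y: float, preferred_style: str) -> str:
    """Pick the nearest shop, preferring the user's taste over raw distance."""
    def key(shop):
        name, x, y, style = shop
        distance = hypot(x - user_x, y - user_y)
        style_penalty = 0 if style == preferred_style else 1
        # Tuples compare element-wise: matching style wins, then proximity.
        return (style_penalty, distance)
    return min(SHOPS, key=key)[0]
```

The design choice here is the tuple key: the assistant's "interpretation" is simply that personal preference outranks distance, which is the kind of contextual weighting the speaker describes.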
The last one is the Agent, a more recent development. Here the AI goes one step further and translates your desire into action. For example, when you say you want a haircut, the Agent fixes the appointment for you even though you had not explicitly asked it to. On the flip side, Alexa once recorded a couple's conversation and e-mailed it to everyone in their contact list! So there are still issues with these kinds of technologies.
A lot of progress has been made on voice input, which no longer requires speakers to follow particular accents. Voice output has also evolved over the past years; Google Maps gives voice output in vernacular languages as well. Intelligent interpretation and agency, however, are still at a nascent stage.
In the last eight years there has been exponential growth in capabilities. However, there has been double-exponential growth in complexity as well, so at present capability lags behind need.
There are some challenges in the consumer domain. The first concerns voice-based input and output. Voice input is good because we speak much faster than we type. But with voice output (input to us), you have to listen to all the options sequentially; you cannot skip a step. Grasping is much faster while reading, so displayed information is preferred for consumption. This is a huge challenge, and it is being addressed.
So for input, voice is efficient; for consumption, reading is advantageous.
The other challenge is the trust factor. How reliable is the answer given by the computer? Is it based on consumer ratings, or on payments made by firms? Next is the overload of specialised skills and actions, each linked to a keyword. Lots of apps have appeared for specific requirements on mobile phones, and heavy customisation for each task becomes too much for anyone to handle. Every company wants an app for its products because it wants to own the customer, and with too many apps it becomes difficult to keep track of them all on the phone.
As the customer base increases, complexity sets in. The prerequisite is that the algorithm should be able to process an individual's request at least as efficiently as a competent human being; otherwise, it creates issues and you end up having to talk to a human anyway. In certain complex cases, only an algorithm can realistically handle the job. For example, given a huge contract, AI can read through it and highlight potentially risky clauses requiring attention. To handle these types of jobs, expertise is being built into the algorithms.
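The contract-review example can be sketched in a few lines. This is a deliberately naive keyword scan, not the method the speaker describes: production systems use trained language models, and the risk terms below are invented for illustration.

```python
# Naive sketch of flagging potentially risky clauses in a contract.
# The risk terms are illustrative; real systems use trained NLP models.

RISK_TERMS = [
    "indemnify",
    "unlimited liability",
    "auto-renew",
    "non-compete",
    "terminate without notice",
]

def flag_risky_clauses(contract_text: str) -> list[str]:
    """Split the text into sentences and return those containing a risk term."""
    clauses = [c.strip() for c in contract_text.split(".") if c.strip()]
    return [
        c for c in clauses
        if any(term in c.lower() for term in RISK_TERMS)
    ]
```

For instance, in "Fees are paid monthly. The vendor may terminate without notice. Support is 24x7.", only the middle clause is flagged for a human reviewer's attention.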
To END, it is important to understand where we are today!
And to understand where we are going tomorrow!!