The decline came faster than anyone could have imagined. The recent reports of ChatGPT getting dumber raise concerns and, more importantly, should give pause to all who are rushing to adopt AI without a readiness assessment, a feasibility study, or an AI transformation strategy.
There is a lot of noise and recycled information floating around. In the last month alone, I have read executive reports and analyses on AI from multiple sources that are more or less the same. I would not be surprised if some of the content was actually generated by ChatGPT! Here’s my attempt to help those who believe that strategy should come before action.
First, I would recommend that we stop calling it ‘artificial intelligence’ and start calling it ‘augmented intelligence’. The shift in terminology will bring some sanity into the discussion and set the correct perspective when enterprises plan to roll out AI. Once the bean counters view this technology as an augmentation of their workforce, they will stop focusing on the wrong question (‘How do we replace people with AI?’) and start asking the right one: ‘How do we supercharge our workforce with AI?’
Technology adoption is not a race to the finish line; it is about creating a sustainable, long-term strategic advantage for your organization. While generic AI use cases that improve your operations will most likely be served by your SaaS providers, the real impact will come from use cases that cut across the enterprise or help you create differentiation: for example, a private LLM with Retrieval-Augmented Generation (RAG), grounded in and fine-tuned on your own data.
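To make the RAG idea concrete, here is a minimal, illustrative sketch of the pattern: retrieve the most relevant private documents for a query, then build an augmented prompt that grounds the model's answer in them. The word-overlap retriever and all names below are hypothetical stand-ins; a production system would use an embedding model and a vector database instead.

```python
import re

# Toy RAG sketch: a word-overlap retriever stands in for a real
# embedding model + vector database. All names are illustrative.

def tokenize(text: str) -> set[str]:
    """Lowercase alphanumeric tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, doc: str) -> float:
    """Crude relevance score: fraction of query tokens found in the doc."""
    q = tokenize(query)
    return len(q & tokenize(doc)) / len(q) if q else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents by overlap score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str], k: int = 2) -> str:
    """Assemble the augmented prompt to send to the private LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Example: internal knowledge your SaaS provider never sees.
knowledge_base = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The Q3 roadmap prioritizes the EU market expansion.",
    "Support tickets are triaged within 4 business hours.",
]

prompt = build_prompt("What is the refund policy?", knowledge_base)
```

The point of the pattern is that the differentiation lives in your retrieval corpus, not in the model itself; the same base LLM answers very differently once it is grounded in your enterprise data.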
Integration of AI as a co-pilot will dramatically improve the quality of communication and the productivity of every individual across the organization, with a larger impact on the non-native English-speaking workforce. The risks are high too. Once people become more reliant on AI to create, wordsmith, paraphrase, and edit their content (emails, slides, blogs, etc.), more of it will look the same or similar and create a ‘blah’ factor. For technical organizations the risks are even higher. If code is generated by AI and later needs to be enhanced, fixed, or upgraded, you are stuck unless your developers have an in-depth understanding of what the code does.
While every SaaS and ERP provider will tout its AI, the burden of effort and the burden of understanding on organizations will increase dramatically. In my book on Cloud computing, I posited that as enterprises adopt various cloud solutions, the burden of effort to collate and make sense of their data increases because the data is fragmented across multiple systems. The situation is now compounded: not only your data but also your predictions will be siloed and fragmented.
My recommendation is to use 2023, and maybe even Q1 2024, to experiment with and learn what AI (LLMs, base models, fine-tuned models, and the various other AI tools) can do for you. In parallel, start defining your data strategy and identifying the use cases that will give you a strategic advantage. Then, somewhere in Q2 2024, begin selecting a tech stack and running use-case POCs that can later be expanded into enterprise-wide solutions.
One last word of caution: before you jump into offloading decision-making to AI, take a moment to read the case study of Volvo, which ended up with more green cars in its inventory than it started with because it failed to keep a human in the loop.