Moral imperatives of AI

May 15, 2023

Despite more than 1,000 leaders and researchers signing an open letter asking everyone to pause AI development, it continues to advance at an unprecedented pace. Every week, if not every day, new tools or enhancements to existing ones are announced. While the pace of AI cannot be paused or stopped, its moral imperatives cannot be denied and need to be studied carefully.

By some measures, wealth disparity has increased with every major advancement in technology. This, in my view, should be the first and foremost impact of AI to be studied. The more income and wealth disparity grows, the more likely social breakdown, mental health issues, violence, and other social ills become.

With each such shift, macroeconomists typically point to an increase in higher-paid jobs (requiring higher qualifications) even as existing jobs are automated away. Where do we find solace when, with AI, the impact will be felt across almost all higher-paid white-collar jobs? Where does that leave society when, even if someone wants further education, their parents cannot afford to pay for it, or they hesitate to take out a loan because there may not be enough jobs? Or when that advanced education is rendered obsolete by the time they earn the degree?

While leaders and think tanks work out a long-term solution, I propose an interim measure that can be tried out. The problem I am trying to address is giving some reprieve to people who are displaced by the adoption of technology, or who cannot get a job because their education and skills cannot be repurposed on short notice, while they reskill for the new reality.

Every machine or software bot (including, but not limited to, AI) should be classified as a ‘worker’ because, after all, it is doing the work of one or more humans. This means companies should pay federal and state taxes for every machine or bot deployed, based on how many human workers it replaces. There can be a standard conversion based on a ‘unit of work’ definition (e.g., 1 robotic arm = 3 human workers on a factory floor; or 1 software bot = 5 human workers in customer support). Balayala or other similar models can be used to compute the unit-of-work-to-worker conversion. Statutory carve-outs and limits can also be established, e.g., a conveyor belt may be excluded while a robotic arm is included; or an ERP platform is excluded but a customer support bot on that platform is included. A rough sketch of such a computation follows.
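To make the conversion concrete, here is a minimal sketch of how the ‘machine worker’ tax might be computed. The categories, worker-equivalent ratios, and per-worker tax amounts are illustrative assumptions made for this post, not figures from any existing statute or from the Balayala model.

```python
# A minimal sketch of the proposed 'unit of work' conversion.
# All categories, ratios, and tax amounts below are illustrative assumptions.

# Worker-equivalents per deployed unit, by category (assumed values).
WORKER_EQUIVALENTS = {
    "robotic_arm": 3,            # e.g., 1 robotic arm ~ 3 factory-floor workers
    "customer_support_bot": 5,   # e.g., 1 software bot ~ 5 support agents
    "conveyor_belt": 0,          # excluded by the carve-out rules
    "erp_platform": 0,           # excluded by the carve-out rules
}

# Assumed annual tax per worker-equivalent, split federal/state.
FEDERAL_TAX_PER_WORKER = 4_000
STATE_TAX_PER_WORKER = 1_500


def automation_tax(deployments: dict[str, int]) -> dict[str, float]:
    """Compute the annual 'machine worker' tax for a company.

    `deployments` maps a category (e.g., 'robotic_arm') to the number of
    units deployed. Categories with a zero ratio are effectively excluded.
    """
    worker_equivalents = sum(
        WORKER_EQUIVALENTS.get(category, 0) * count
        for category, count in deployments.items()
    )
    return {
        "worker_equivalents": worker_equivalents,
        "federal_tax": worker_equivalents * FEDERAL_TAX_PER_WORKER,
        "state_tax": worker_equivalents * STATE_TAX_PER_WORKER,
    }


if __name__ == "__main__":
    # A factory deploying 10 robotic arms and 2 customer support bots:
    # 10*3 + 2*5 = 40 worker-equivalents -> $160,000 federal, $60,000 state.
    print(automation_tax({"robotic_arm": 10, "customer_support_bot": 2}))
```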

To offset this (i.e., take a deduction against the tax owed), companies should be allowed to create a ‘Basic Income Fund’ that provides a predetermined universal basic income and health coverage for the human workers they displace, or do not hire, because of automation. The fund can be used by displaced workers to reskill for new technologies or ways of working. As a guardrail against misuse, there can be limits on how long the universal basic income is available to a displaced worker (say, 18-24 months, or a duration equivalent to the depreciation period of the displacing assets). A companion sketch of this offset appears below.
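Here is a companion sketch of the offset, again with assumed numbers: the monthly basic income, the health coverage cost, and the 24-month cap are placeholders, and the actual deduction mechanics would be set by the governing statute.

```python
# A minimal sketch of the proposed 'Basic Income Fund' offset.
# The benefit amounts, coverage cost, and the 24-month cap are assumptions.

MONTHLY_BASIC_INCOME = 2_500    # assumed predetermined monthly basic income per worker
MONTHLY_HEALTH_COVERAGE = 600   # assumed monthly health coverage cost per worker
MAX_MONTHS = 24                 # guardrail: 18-24 months, or the asset depreciation period


def fund_contribution(displaced_workers: int, months: int) -> float:
    """Cost of covering displaced workers for `months` months, capped at MAX_MONTHS."""
    covered_months = min(months, MAX_MONTHS)
    return displaced_workers * covered_months * (MONTHLY_BASIC_INCOME + MONTHLY_HEALTH_COVERAGE)


def net_automation_tax(tax_owed: float, displaced_workers: int, months: int) -> float:
    """Tax owed after deducting the Basic Income Fund contribution (deduction capped at the tax)."""
    deduction = min(fund_contribution(displaced_workers, months), tax_owed)
    return tax_owed - deduction


if __name__ == "__main__":
    # The factory from the previous sketch owes $220,000 and displaces 5 workers
    # for 18 months: 5 * 18 * (2,500 + 600) = $279,000, which fully offsets the tax.
    print(net_automation_tax(220_000, displaced_workers=5, months=18))  # -> 0.0
```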

The idea needs much more exploration, especially around governance and implementation details. The intent is to minimize the impact of the disruption caused by the accelerated progress and adoption of new technology. Pausing or stopping the advancement of technology is not just impossible; it is just plain stupid.

In the rapidly evolving landscape of AI, it is crucial to acknowledge and navigate the moral and ethical considerations that arise. By initiating discussions, staying informed about the latest research, and actively participating in shaping the ethical frameworks surrounding AI, we can ensure that this transformative technology is harnessed for the betterment of humanity.
