
RSTV: THE BIG PICTURE- REGULATING AI- ARTIFICIAL INTELLIGENCE

Introduction:

One of the most powerful men in IT, Sundar Pichai, has backed regulations for artificial intelligence. While Pichai isn’t the first big tech executive to say so publicly, his voice matters, given that Google is arguably the world’s largest AI company. Tesla and SpaceX chief Elon Musk has repeatedly been vocal about the need for regulating AI, even warning that “by the time we are reactive in AI regulation, it’s too late”. Microsoft president Brad Smith is another prominent figure in tech who has called for the regulation of AI. Pichai, in an editorial, advocated that AI be regulated keeping in mind both the harm and the societal benefits the technology can be used for. He also said that governments must be aligned on regulations around AI for “making global standards work”. While India has been vocal about the use of AI in various sectors, it is far from regulating it. A 2018 NITI Aayog paper proposed five areas where AI can be useful; in the same paper, the think tank also flagged the lack of regulation around AI as a major weakness for India.

Artificial Intelligence:

  • Artificial intelligence is the branch of computer science concerned with making computers behave like humans.
  • AI refers to the ability of machines to perform cognitive tasks like thinking, perceiving, learning, problem solving and decision making.
  • Quality of Data is the crucial element to the success of AI.

National Strategy for Artificial Intelligence:

  • NITI Aayog unveiled its discussion paper on national strategy on AI which aims to guide research and development in new and emerging technologies.
  • NITI Aayog has identified five sectors — healthcare, agriculture, education, smart cities and infrastructure and transportation — to focus its efforts towards implementation of AI.
  • The paper focuses on how India can leverage these transformative technologies to ensure social and inclusive growth.

Opportunities:

  • Advancements in technology over the last couple of decades, including the computing evolution (cloud, big data, machine learning), falling costs (cheaper data storage) and growing digitalisation, have opened up opportunities for AI.
  • Access to technology is easing for the masses.
  • The demand for AI and machine learning specialists in India could rise by 60%.

Benefits from AI:

  • Healthcare: increased access and affordability of quality healthcare.
  • Agriculture: enhanced farmers’ income, increased farm productivity and reduction of wastage.
  • Education: improved access and quality of education.
  • Smart Cities and Infrastructure: efficient infrastructure and connectivity for the burgeoning urban population.
  • Smart Mobility and Transportation: smarter and safer modes of transportation and better management of traffic and congestion.
  • Energy: In renewable energy systems, AI can enable storage of energy through intelligent grids enabled by smart meters.
  • NITI Aayog estimates that adopting AI could boost India’s gross value added (GVA) by 15% by 2035.
  • Increased efficiency and enhanced governance across government.
  • AI can improve the ease of doing business, as well as make the lives of people simpler.

AI and Legal framework:

  • AI systems have the capability to learn from experience and to perform tasks autonomously, without human intervention.
  • This also makes AI the most disruptive and self-transformative technology of the 21st century.
  • So, if AI is not regulated properly, it is bound to have unmanageable implications.
  • Consider, for example, the consequences if the electricity supply suddenly stops while a robot is performing surgery and access to a doctor is lost: who bears the liability?
  • These questions have already confronted courts in the U.S. and Germany.
  • No comprehensive legislation to regulate this growing industry has been formulated in India till date.
  • All countries, including India, need to be legally prepared to face such kind of disruptive technology.
  • AI is growing multi-fold, and we still do not know all the advantages or pitfalls associated with it. It is therefore of utmost importance to have a two-layered protection model: one, technological regulators; and two, laws to control AI actions and to fix accountability for errors.

AI safety can only be achieved by regulating AI:

  • Legally regulating AI can ensure that AI safety becomes an inherent part of any future AI development initiative.
  • This means that every new AI system, regardless of its simplicity or complexity, will go through a development process that inherently focuses on minimizing non-compliance and the chances of failure.
  • To ensure AI safety, the regulators must consider a few must-have tenets as a part of the legislation. These tenets should include:
    • the non-weaponization of AI technology, and
    • the liability of AI owners, developers, or manufacturers for the actions of their AI systems.
  • Any international agency or government body that sets about regulating AI through legislation should consult with experts in the field of artificial intelligence, ethics and moral sciences, and law and justice.
  • Doing so helps eliminate any political or personal agendas, biases, and misconceptions while framing the rules for regulating AI research and application. Once framed, these regulations should be upheld and enforced strictly.
  • This will ensure that only the applications that comply with the highest of the safety standards are adopted for mainstream use.
  • While regulating AI is necessary, it should not be done in a way that stifles the existing momentum in AI research and development. Thus, the challenge will be to strike a balance between allowing enough freedom to developers to ensure the continued growth of AI research and bringing in more accountability for the makers of AI.
  • While too much regulation can prove to be the enemy of progress, no regulation at all can lead to the propagation of AI systems that not only halt progress but can potentially lead to destruction and global decline.

Concerns:

  • Predicting and analysing legal issues with regard to AI use, and their solutions, is not simple.
  • Of the roughly 20,000 AI PhDs worldwide, fewer than 400 are in India.
  • AI is trained from data and this data has human biases.
  • The armed forces of the US and China have already invested billions of dollars to develop lethal autonomous weapon systems (LAWS), intending to gain strategic and tactical advantage over each other. This runs the risk of an arms race.
  • India does not have access to the high-grade infrastructure that other countries have.
  • AI has to meet the first and foremost challenge of acceptability with the users from the government, public sector and the armed forces, or even the private sector.
    • As users of AI, their interest in the technology augmenting their own ability, and not posing a threat, is quite pertinent.
  • India ranks only 13th in the quantity and quality of AI research produced.
  • AI systems themselves are vulnerable to attacks.
  • Technical competence in this fast-paced sector, primarily in the case of government, could be a roadblock.
  • AI can better adapt to the goals and expectations of the Indian decision makers, if the technology development is indigenous. Foreign dependence in this case would be detrimental and unproductive.
  • AI has set off an economic and technological competition, which will further intensify.
  • LAWS operate without human intervention, and there is a formidable challenge in distinguishing between combatants and non-combatants, which is a subject of human judgment.

Need of the hour:

  • Ethical norms regarding the uses of AI, and our ability to regulate them in an intelligent and beneficial manner, should keep pace with fast-changing technological capabilities.
  • That is why we need AI researchers to actively involve ethicists in their work.
  • Some of the world’s largest companies, like Baidu, Google, Alibaba, Facebook, Tencent, Amazon, and Microsoft, are cornering the market for AI researchers. They also need to employ ethicists.
  • Additionally, regulators across the world need to be working closely with these academics and citizens’ groups to put brakes on both the harmful uses and effects of AI.
  • For governments to regulate, we need to have clear theories of harms and trade-offs, and that is where researchers really need to make their mark felt: by engaging in public discourse and debate on what AI ethics and regulation should look like.