Priorities Should Include Education and Research
WASHINGTON — (BUSINESS WIRE) — July 10, 2018 — The following is an opinion editorial provided by Naveen Rao of Intel Corporation.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20180710005881/en/
Naveen Rao was the founder of Nervana and is now corporate vice president and general manager of the Artificial Intelligence Products Group at Intel Corporation. (Photo: Intel Corporation)
Most people agree that artificial intelligence (AI) will transform modern society in positive ways. From autonomous cars that will save thousands of lives, to data analytics programs that may finally discover a cure for cancer, to machines that give voice to those who can’t speak, AI will be remembered as one of the most revolutionary innovations in human history.
But this fantastic future is a long way off, and the path to get us there is still under construction. Never before has society undertaken such a significant transformation so deliberately, and no blueprints exist to guide us. Yet one thing is clear: AI is bigger than any one company, industry or country can address on its own. It will take the whole of our technology ecosystem and the world’s governments to realize the full promise of AI.
Industry and academia have been actively pursuing this future for quite some time, and early solutions are already having an impact. Government entities have been slower to engage but are now crafting strategies to advance AI and solve some of their biggest challenges. China, India, the United Kingdom, France and the European Union have already come out with formal plans for AI, and this is good. We need more countries to develop AI strategies – especially the U.S.
Ultimately, governments, industry and academia should collaborate toward the advancement of AI. An ideal public-private arrangement would apply regulation sparingly while simultaneously fostering innovation and a thriving ecosystem. It’s the kind of arrangement the U.S. is known for, and a key reason that most of the great achievements of the technology industry grew out of U.S.-based companies.
In my role as leader of Intel’s artificial intelligence programs, I am often asked how governments can help AI progress. To that question, I offer three priorities:
Education
Beginning in the elementary grades, school systems must start thinking about their curricula with AI in mind, including the development of whole new education tracks. An early example of this is the AI degree program under development at the Australian National University. This first-of-its-kind program is being crafted by Intel Senior Fellow and ANU computer science professor Genevieve Bell. More is needed. Schools can also take interim steps to better incentivize STEM pathways from an early age. Discounted tuition or accelerated degree programs for data scientists may be one way to produce more of the scientists we badly need to fully realize the benefits of AI.
Then there’s the user side of the AI society. Just as schools used to teach basic typing skills or computer skills, they will need to teach “guided computational” skills so that people who work with machines can successfully interact with them. Because some jobs will most certainly be automated in the AI future, it’s also important to emphasize skills that are uniquely human. Person-to-person interaction will never go away, and those who are good at it will be in high demand.
Research and Development
To craft effective public policy, governments should develop their own perspective on AI. One of the best ways to do this is through nationally funded R&D. Great programs around algorithmic explainability are already underway in both the U.S. and Europe. In the U.K. specifically, government-funded initiatives are addressing the use of AI for early diagnosis of illness, reduction of crop disease and delivery of digital services in the public sector. This is good, and more is needed.
Governments globally should lean in to develop effective methods for human-AI collaboration and engagement, find ways to ensure the safety and security of AI systems, and develop shared public data sets and environments for AI training and testing. Many of these challenges will be addressed through collaborations between academia, industry and government, with the latter funding more research projects through institutions like the National Science Foundation and the National Institute of Standards and Technology. These efforts would go a long way toward clarifying the regulatory requirements that will be needed in our AI future.
Regulatory Landscape
AI will affect a whole host of laws and regulations. There are dense thickets of policies around liability, privacy, security and ethics – all areas where AI could come into play and where thoughtful debate is needed before laws and regulations are developed. Governments too eager to proscribe AI in its various forms will hinder its advancement.