Does the current IP framework incentivise and support AI use? How can it accommodate the various ways AI will be used? The two-day AI: Decoding IP Conference offered some insights.
Held jointly by the World Intellectual Property Organization (WIPO) and the UK Intellectual Property Office (IPO) at London Stadium on 18-19 June 2019, the event featured engaging discussions on AI applications, new business models, ownership, liability, entitlement, competition, data access, regulatory incentives and ethical considerations. Here are my 10 takeaways:
1. Machine learning is the next big thing in AI
Machine learning features in more than one-third of all identified inventions and represents 89 per cent of AI filings, according to WIPO Technology Trends, the first in a new series of WIPO reports, published in February 2019. WIPO’s director general Francis Gurry highlighted the report’s key findings in his introductory remarks (for a more detailed analysis, please see our report on the AI Hub).
The UK’s Minister of State for Universities, Science, Research and Innovation, Chris Skidmore MP (whose responsibilities include IP), noted that the UK IPO recently published a similar report. Artificial Intelligence – a worldwide overview of AI patents largely confirms WIPO’s findings, but emphasises the importance of AI patents first filed in the UK to the global AI landscape.
Lord David Kitchin JSC not only gave a brilliant summary of the diverse legal issues arising in the AI era, but also agreed that this technology has the potential to drive economic growth.
2. The modern David and Goliath fight is about data
Data-sharing agreements might help to ease the tensions between large companies that hold a lot of data and smaller companies with an urgent business need to make use of it.
Dr Anat Elhalal, head of AI technology at Digital Catapult, introduced Machine Intelligence Garage, a programme that supports early-stage start-ups with access to computing power, expertise and, most recently, resources around responsible AI.
3. Liability might become the downside of AI ownership
Once AI ownership has been established, the question of responsibility/liability immediately follows – but has yet to be comprehensively answered.
Dr Zoë Webster, director of AI and data economy at Innovate UK, and other panellists in the first session on AI applications and new business models collectively wondered whether people would be less keen to claim IP rights if they had a closer look at the liability side.
4. Technology moves fast and legislation lags
Uncertainty over legislation is feeding through into companies’ IP strategy for AI.
A recent IBM Institute for Business Value survey found that 82 per cent of companies are now considering AI adoption for their business, but there is little consensus on how best to protect AI-related investments and achievements legally.
Belinda Gascoyne, senior legal counsel at IBM, where she heads the EMEA IP law team, argued that companies need to decide whether to disclose an invention to the public and apply for a patent, or to implement careful confidentiality measures and keep it as a trade secret.
With regard to copyright, variation in laws across jurisdictions can cause difficulties for global companies, which currently prefer jurisdictions with a less strict approach, such as the US or Israel.
5. Inventors must still be human
Inventors will still either be the owner of an AI system or the user/designer of that system.
As Dr Noam Shemtov, reader in IP and technology law at Queen Mary University of London, shows in A study on inventorship in inventions involving AI activity, it is currently not possible to name an AI system or a legal entity as an inventor, because the concept of inventorship requires the deployment of human mental faculties.
Whether this approach will remain justified needs to be assessed more carefully once truly AI-created inventions become a reality.
6. AI ethics is in its infancy
Legal and moral responsibilities need to be clarified: this will help manage risks and design ‘trustworthy’ AI.
In the context of AI ethics, Dr Jon Machtynger, Microsoft solutions architect specialising in advanced analytics and artificial intelligence, introduced Microsoft’s principles of AI design:
- treat all people fairly;
- be inclusive and empower and engage people;
- perform reliably and safely;
- be transparent and understandable;
- be secure and respect privacy; and
- have algorithmic accountability.
7. We have yet to balance IP law and competition law
The question is whether (and how) authorities can shape the design of IP rights to produce results that benefit wider society.
Ruth Okediji, Jeremiah Smith Jr. professor of law at Harvard Law School and co-director of the Berkman Klein Center for Internet and Society at Harvard University, asked whether it is advisable to grant an exception to patent and copyright protection in order to avoid infringement allegations relating to the input of training data into the AI system.
8. AI robot judges and examiners are still science fiction
Can AI become a member of a skilled human team?
Dr Ilanah Fhima, reader in IP law at University College London, explained that the EUIPO recently launched an AI image-searching tool that identifies marks visually similar to an image uploaded by a user. However, machine-learning models are not yet able to determine likelihood of confusion in trademark law.
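For readers curious about what sits behind such a tool, below is a minimal sketch of how an image-similarity search is commonly built: a pretrained vision model turns each mark into a feature vector, and cosine similarity ranks candidates against the uploaded image. This is purely illustrative and assumes nothing about the EUIPO’s actual implementation; the model choice, helper functions and file paths are all assumptions.

```python
# Illustrative only: a common pattern for visual similarity search,
# not the EUIPO's actual system.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a pretrained network and drop its classification head,
# so the model outputs a feature vector instead of class scores.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    """Map an image file to a normalised feature vector."""
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        vec = backbone(preprocess(img).unsqueeze(0)).squeeze(0)
    return vec / vec.norm()

def most_similar(query_path: str, register_paths: list[str], top_k: int = 5):
    """Rank registered marks by cosine similarity to the uploaded query image."""
    query = embed(query_path)
    scored = [(p, float(torch.dot(query, embed(p)))) for p in register_paths]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]
```

In practice, a system of this kind would pre-compute and index the embeddings for the entire register rather than re-encoding every mark per query.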
Sir Robin Jacob, the Sir Hugh Laddie chair of IP law at University College London, also rejected the deployment of machine judges and examiners, but quite entertainingly proposed that AI systems could instead become part of skilled human teams.
In fact, the concept of a skilled team is already established in several patent laws; adding AI systems to that team would result in a higher standard for obviousness, particularly where super-intelligent AI is concerned.
9. New does not always mean better
The existing patent system may be able to offer sufficient protection for all three categories of AI-related inventions: inventions on AI; inventions using AI as a tool; and truly AI-created inventions.
Dr Heli Pihlajamaa, a director at the European Patent Office (EPO) in Munich, Germany, referred to the EPO’s published guidelines on patentability requirements. She argued that there is no need to amend the EPO’s current approach to patentability in order to answer the four key questions arising in the field of AI-related inventions: determination of inventorship; the applicable legal standard for assessing patentable subject matter; black-box patenting and sufficiency of disclosure; and the definition of the person skilled in the art.
10. Collaboration could be the basis for a global approach to AI
Professor Alice Lee, associate dean and professor of law at the University of Hong Kong, talked about the different layers of a potential global divide. She advocated building non-legislative networks, which, she said, could help countries struggling with legislative amendments to keep pace with rapidly changing areas of AI.
Tim Moss, CEO of the UK IPO, closed the conference by saying that policymakers are keen to continue discussions with industry on changes to current IP frameworks that would address the concerns prompted by AI developments.
A version of this blog was first published on the IPKat.
If you are keen to learn more about AI, please visit our AI Hub.