
Freshfields TQ



When your AI starts to invent: Hot topics in patenting AI-generated inventions

In his 1999 book The Age of Spiritual Machines, the renowned futurist Ray Kurzweil predicted that, by the year 2029, computers would routinely be capable of passing the Turing Test: artificial intelligence (AI) would have met and surpassed the level of human intelligence. Kurzweil also predicted that, by 2099, machines would have been granted the same legal rights and freedoms as humans. How prescient these predictions will prove to be remains to be seen.

In this post, we discuss patentability issues. AI is not (yet) truly capable of inventing autonomously, i.e. without some human input. As discussed by our colleague Christopher Stothers in an earlier post, patent offices like the EPO do not yet recognise AI as patent inventors – that title is still reserved for humans. Patent offices have, however, seen a boom in AI-related patent applications landing in their inboxes. There are many open questions in relation to patents in the AI context, including the patentability of the results of AI’s work, i.e. AI-generated or AI-assisted inventions. These (and many other IP issues) are currently up for discussion at WIPO within the framework of its consultation on AI and IP policy. Here we discuss two key patentability-related questions.

Inventive step/non-obviousness

Patent protection is available for inventions that involve an inventive step. This is assessed by asking whether a person skilled in the art to which the invention belongs would regard the claimed invention as obvious in light of available prior art. Where the claimed invention is generated by AI, how does that affect the assessment?

For now at least, the “person skilled in the art” (PSA) is deemed to be a human (or team of humans), not a machine, despite having some machine-like characteristics – Jacob LJ’s oft-quoted judgment in Rockwater v Technip describes the PSA as “a nerd” but “not a complete android”. However, the PSA may have the assistance of machines. The EPO’s Guidelines for Examination explain that the PSA is presumed to have “at his disposal the means and capacity for routine work and experimentation which are normal for the field of technology in question”. The factual question is therefore whether it was common to use AI in the relevant field at the time of the claimed invention.

In recent times, this may mean there is a gap between what the inventor had at their disposal (which might include, for example, quite advanced AI tools) and what the PSA is deemed to have (for example, only less sophisticated (or no) AI). Does that unfairly skew the system in favour of patent applicants? Or does it amount to a perfectly proper reward for those working at the cutting edge of research using AI? We suggest the latter – there is no real distinction between this situation and the often-encountered situation of a well-funded laboratory making an invention by utilising equipment that has not yet become routinely available across the field.

This analysis works fine for AI-assisted inventions. But what about AI-generated inventions, in which the AI is fundamental to the core inventive concept? Assuming that patent protection should be permissible for such inventions, how can the traditional obviousness analysis be applied?

While formally the PSA may still be a human having AI at their disposal, the practical reality is likely to be an assessment of whether “commonly available” AI can achieve the same outcome as the “inventive” AI. If it cannot, the concept of obviousness becomes a difficult one to apply. Is it obvious if the “commonly available” AI could, given time, learn to get to the same solution? How long is too long? To what extent is the need for human intervention (e.g. feeding in new learning datasets, tweaking the algorithm) to achieve the claimed result indicative of obviousness or otherwise? These are difficult questions that we are going to have to grapple with as AI-generated inventions become more commonplace.

Sufficient disclosure

In order to obtain patent protection, the patent specification needs to disclose the claimed invention in sufficient detail to enable the PSA to carry out the invention. If, however, part (or all) of the invention was generated by AI, what exactly should be set out in a patent application in order to meet the sufficiency requirement? In particular, should the AI algorithm be fully disclosed, or is it sufficient to mention that AI has been used to carry out certain steps? Is it necessary to disclose the full data set that was fed into the AI as it was learning?

The answer will depend on what is actually being claimed. Take, for example, the invention of a specific molecule for pharmaceutical use. AI may have been used to identify the suitable molecule. It is, however, unlikely to be necessary to indicate in the patent application precisely how the AI reached the specific outcome and what data was fed into it. It would be sufficient to identify and describe the molecule and explain why it is plausible that it will work for the intended purposes (though this latter point may involve reliance on the process carried out by the AI).

If, however, reproducing the invention relies on running an AI algorithm as part of a process, this may present a challenge in terms of disclosure. While theoretically it may be possible to print out the full code for the algorithm, list all the data fed into the algorithm, and insert both into a potentially many-hundreds-of-pages-long patent application, this is not a practical solution. A better approach would, for instance, be the introduction of a system for depositing algorithms and datasets, similar to the system for depositing biological material that already exists.

It is also worth considering whether patents are the right form of IP right for protecting AI, AI-related or AI-generated inventions. Provided that, e.g., the source code, the training data and any learning algorithms cannot be reverse-engineered, the AI may be appropriately protected as confidential information / a trade secret. In some circumstances, companies might enjoy more of a competitive advantage from keeping the details of their AI secret than from disclosing those details in exchange for a limited period of protection under a patent.

One is left to wonder whether these issues will still be relevant in the future (even, perhaps, the near future). One can well imagine that if and when AI becomes capable of independent inventive activities, it may also be able to draft perfect patent applications for its inventions in a way that would be compatible with whatever requirements exist. The possibilities are endless.


Tags

ai, patents, intellectual property, global