This problem is challenging to address because legal, compliance, and regulatory departments are struggling to keep up with the pace of change. Consequently, they often establish controls and policies that either limit the use of AI for IP creation or delay the deployment of use cases until there is a thorough understanding of how the AI functions.
I have seen cybersecurity teams block entire categories of AI because they didn’t have the controls in place to support them. This is extreme. Likewise, I have seen policies stating that IP would not be protected or progressed if AI was used in its development. This happens because legal teams may not understand how the generation of vectors or tokens works, and therefore feel the AI is “stealing” content from others without attribution, or exposing content created specifically for the company. This is a paradigm that needs to be solved.
One way to do this is to accept a level of risk, which means putting validation and testing in place. For instance, a company that uses Claude through Amazon Bedrock can process the same prompts on OpenAI or Llama models. By comparing the responses, the company can judge whether the information was genuinely generated or merely repeated from source material. At the same time, it can use the models’ built-in sampling controls to distinguish creative output from deterministic responses.
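To make this concrete, here is a minimal sketch of what such cross-model validation might look like in Python. It assumes the official openai and anthropic SDKs with API keys set in the environment; the model names, the example prompt, and the use of simple string similarity are illustrative assumptions rather than a definitive implementation, and a production pipeline would likely use embedding-based comparison and legally agreed thresholds.

```python
# Cross-model validation sketch: send the same prompt to two providers,
# compare the answers, and probe determinism via the temperature setting.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment;
# the model names below are illustrative choices, not recommendations.
from difflib import SequenceMatcher

from anthropic import Anthropic
from openai import OpenAI


def ask_openai(prompt: str, temperature: float = 0.0) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model works
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return resp.choices[0].message.content


def ask_claude(prompt: str, temperature: float = 0.0) -> str:
    client = Anthropic()
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",  # hypothetical choice
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return resp.content[0].text


def similarity(a: str, b: str) -> float:
    """Crude lexical overlap in [0, 1]; embeddings would be more robust."""
    return SequenceMatcher(None, a, b).ratio()


if __name__ == "__main__":
    prompt = "Summarise the mechanism of action of monoclonal antibodies."

    # 1. Cross-model agreement: near-identical answers across vendors can
    #    indicate repeated (memorised) material rather than fresh generation.
    a, b = ask_openai(prompt), ask_claude(prompt)
    print(f"cross-model similarity: {similarity(a, b):.2f}")

    # 2. Self-consistency: a deterministic run (temperature=0) versus a
    #    creative run (temperature=1) on the same model gauges how much of
    #    the answer is fixed versus freely sampled.
    cold, hot = ask_openai(prompt, 0.0), ask_openai(prompt, 1.0)
    print(f"determinism check:      {similarity(cold, hot):.2f}")
```

In practice, what counts as “too similar” would be set jointly with legal and compliance, which is exactly the kind of evaluation that can then be automated and run routinely rather than blocking adoption outright.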
Another way is to make the IP part of a supermind intelligence and classify that intelligence as either an employee or a tool. As an employee, it would be paid to train and provide answers, with pay taking the form of opportunity costs and improvements to the ecosystem that supports the AI. As a tool, it would be classified and regulated as such, and put on a lifecycle of operational support. I would prefer to see it as a new type of employee, to help navigate the complexities of reasoning abilities and potential self-awareness.
In the end, legal, regulatory, and compliance teams can automate the evaluation aspects of the problem, allow teams to use the AI, and focus on the harder problems: agent ecosystems, neural AI, graph representations of IP, and the development of novel constructs, along with how to provide the proper controls while still allowing them to be used.
From personalised medicine to space-based research, the potential of AI to revolutionise drug development and patient care is boundless. However, navigating this era of rapid innovation requires collaboration, forward-thinking leadership, and the ability to address complex challenges such as intellectual property and ethical considerations.
Christopher Lundy’s vision highlights the opportunities ahead, inspiring stakeholders to harness the full potential of emerging technologies in life sciences. The journey to a smarter, more efficient, and more inclusive healthcare ecosystem is underway—and AI is leading the charge.
Connect with leading experts, discover transformative technologies, and explore career opportunities shaping the future of healthcare at Panda Intelligence. Visit our LinkedIn to stay up to date on our exclusive interviews, industry updates, and expert perspectives.