Patently Strategic - Patent Strategy for Startups

Patents and AI: Current Tools, Future Solutions

March 20, 2024 Season 4 Episode 3

We’re talking about AI and its impact on the patent system.

This month's episode evaluates where we presently are and considers where it could all be heading. Dr. David Jackrel and Dr. Ashley Sloat lead a two-part discussion with our all-star panel that begins with a deep dive on the present state of AI patent tools for searching, proofreading, drafting, and prosecution – and then moves on to an exploration of how these tools could eventually provide solutions for many problems plaguing the industry including PTAB invalidation rates, hindsight bias, prior art search quality, and the unsustainable bar.  Discussion highlights include:

⦿ ChatGPT 4.0 vs. professionals on core competencies
⦿ Why AI is evolving so rapidly
⦿ AI problems and hallucinations
⦿ AI and public disclosure risk
⦿ AI implications for inventorship
⦿ Current state of AI-assisted patent searching, proofreading, drafting (rule and LLM-based), and prosecution tools
⦿ AI's potential future role in the patent system for fixing issues with the PTAB, search quality, and the unsustainable bar

David and Ashley are also joined today by our always exceptional group of experts including:

⦿ Kristen Hansen, Patent Strategy Specialist at Aurora
⦿ Ty Davis, Patent Strategy Associate at Aurora
⦿ Josh Sloat, Chief Everything Else Officer at Aurora

** Mossoff Minute **

In this month's Mossoff Minute, Professor Adam Mossoff discusses the patentability of AI-generated works and inventions.

** Discussed Links **

⦿ USPTO Inventorship Guidance for AI-Assisted Inventions

** Follow Aurora Patents **

⦿ Home: https://www.aurorapatents.com/
⦿ Twitter: https://twitter.com/AuroraPatents
⦿ LinkedIn: https://www.linkedin.com/company/aurora-cg/
⦿ Facebook: https://www.facebook.com/aurorapatents/
⦿ Instagram: https://www.instagram.com/aurorapatents/
⦿ TikTok: https://www.tiktok.com/@aurorapatents
⦿ YouTube: https://www.youtube.com/@aurorapatents/

Thanks for listening! 

---
Note: The contents of this podcast do not constitute legal advice.

[00:00:00] Josh: G'day and welcome to the Patently Strategic podcast where we discuss all things at the intersection of business, technology, and patents. This podcast is a monthly discussion amongst experts in the field of patenting. It is for inventors, founders, and IP professionals alike, established or aspiring. In today's episode, we're talking about AI and its impact on the patent system.

As society's latest and perhaps most capable change agent, AI is eating the world. Patent practitioners and inventors are far from immune to its effects and it's time we had a chat about this. Our news feeds are overrun with stories of how AI is profoundly reshaping the world at a breakneck pace and achieving results that were only recently the mere dreams, or perhaps threats depending on who you ask, of science fiction.

Experts are asserting that AI's impact will be bigger than the industrial revolution and that we could be less than five years from the singularity, or the moment where AI is no longer even under human control. These large language model [00:01:00] neural networks like ChatGPT, which predict the next most likely word based on probability, are achieving remarkable results in some of the most complex human contexts.
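For the technically curious, that "predict the next most likely word" loop can be sketched in a few lines. This is a toy illustration with an invented vocabulary and invented scores, not any real model's internals:

```python
# Toy sketch of next-token prediction: score candidate words, convert the
# scores to probabilities with a softmax, then sample. Real LLMs do this
# over tens of thousands of tokens with learned scores; the vocabulary and
# logits here are made up for illustration.
import math
import random

vocab = ["claim", "patent", "wheel", "banana"]
logits = [2.0, 1.4, 0.3, -1.2]  # hypothetical model scores for the next word

exps = [math.exp(score) for score in logits]
probs = [e / sum(exps) for e in exps]  # softmax: scores -> probabilities

next_word = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", next_word)
```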

According to a research paper published in December of 2022, OpenAI's ChatGPT passed all three parts of the U.S. Medical Licensing Exam and did so within a comfortable range. OpenAI's models have been used to create 3D games by prompters with no knowledge of game development, and a functioning website from a messy notepad sketch.

GPT-4 can write code in all major programming languages. And this one will probably hit closer to home for some in the audience, so prepare to get uncomfortable if this is news. Not only did GPT-4 pass the bar exam, but it did so scoring in the 90th percentile, passing all sections of the UBE with an accuracy rate of 74.5%.

For context, that's 9.5 percent higher than average human aspiring attorneys, and it exceeds Arizona's minimum passing score, which is the highest threshold among the 36 states and jurisdictions using the bar exam. The researchers who [00:02:00] conducted the test concluded, quote, large language models can meet the standard applied to human lawyers in nearly all jurisdictions in the United States by tackling complex tasks requiring deep legal knowledge, reading comprehension, and writing ability.

The pace at which this is happening and the trajectory of where it's going are staggering, happening far more quickly than our own wetware and dated operating systems can possibly comprehend. Core human competencies with tasks like speech recognition, handwriting recognition, image recognition, reading comprehension, and language understanding were, if not computationally impossible, certainly well below benchmarks of human performance as recently as the onset of the pandemic.

But thanks to cheaper compute power for model training, increased data availability, the development of deep learning techniques, and the emergence of large-scale neural networks like OpenAI's generative pre-trained transformer, AI has evolved rapidly, especially in the areas of natural language understanding and generation.

AI is now surpassing human performance in many [00:03:00] of these core competencies, and several of the trend lines remain vertical. I mentioned GPT-4 passing the bar in the 90th percentile. Its predecessor, version 3.5, only scored in the 5th percentile. That's one year and a half-tick of a version to go from failing to top of the class.

But for all the hype, it's far from perfect. Beyond highly documented and somewhat hilarious issues with desires to hack computers and break up marriages, AI also presently suffers from a phenomenon known as hallucination. ChatGPT defines hallucination as, quote, a situation where an AI system generates outputs or information that is not based on real or existing data, but is instead a product of its own internal processes.

This term is often used to describe instances where an AI system produces content that seems realistic but is entirely fabricated; for example, generating text that seems coherent and contextually appropriate but is entirely fictional. This became a very real-world problem for a couple of New York attorneys this past June who [00:04:00] submitted a legal brief that included six fictitious citations generated by ChatGPT.

According to Reuters, the attorneys, who were surprised that, quote, a piece of technology could be making up cases out of whole cloth, end quote, were sanctioned by the presiding judge. So while there are still some glitches in the Matrix, most experts agree that it's only a matter of time before it's scary good.

Geoffrey Hinton, the British computer scientist who conceived of the neural net behind this wave of AI tech, is considered the godfather of AI. In a recent 60 Minutes interview, he said that he believes that in just five years' time, AI may well be able to reason better than us, and that we are moving into a period when, for the first time ever, we will have things more intelligent than us.

And as Agent Smith would remind us, this is all the sound of inevitability. It's not if, it's when this will all touch our lives and careers in profound ways. And it's already starting, so we're already behind. So that's our mission today: evaluate where we presently are, and then consider where it could all be heading.

Dr. David Jackrel, President of Jackrel [00:05:00] Consulting, and Dr. Ashley Sloat, President and Director of Patent Strategy here at Aurora, lead a two-part discussion with our all-star patent panel that begins with a deep dive into the present state of AI patent tools, and then moves on to an exploration of how these tools could eventually provide solutions for many problems plaguing the industry, including PTAB invalidation rates, hindsight bias, prior art search quality, and the unsustainable bar.

Dave and Ashley are joined today by our always exceptional group, including Kristen Hansen, Patent Strategy Specialist at Aurora, and Ty Davis, Patent Strategy Associate at Aurora. We didn't get to a couple of topics with the panel that I had hoped to discuss. Since they're very likely top of mind for many in this audience, I did want to take just a minute to comment on the confidentiality elephant in the room, as well as AI and inventorship.

One of the biggest questions swirling around the use of AI in the patenting process centers around confidentiality. Generative AI systems like ChatGPT use models that are actively trained with user-provided prompts, and some of those models are considered public record. As we've [00:06:00] discussed at great length in prior episodes, public disclosure before getting a patent application on file is a leading cause of patent death.

And public disclosure has been broadly interpreted to mean that if the information is publicly available, whether seen by someone or not, the invention is not new and, therefore, unpatentable. Amazon warned its employees not to include confidential information in their prompts because they started seeing answers pop up that closely matched internal materials.

And the New York Times is suing OpenAI and Microsoft for copyright infringement on the grounds that GPT outputs large chunks of near-verbatim excerpts from Times articles. So are you at risk of creating your own public disclosure by using ChatGPT with confidential invention information? Well, it will be impossible to answer this question with a high level of confidence until it's tested by the courts.

And everyone has to make their own decisions, so this isn't advice or a recommendation. But what we're hearing, at least from software vendors and from OpenAI's most recent terms of service, is that the answer depends on how you interface with the language [00:07:00] model. For the public-facing ChatGPT, unless you turn off conversation history, your inputs will be retained and used to train models.

When using the API, which would be the case for a lot of professional software tools, ChatGPT Team, or ChatGPT Enterprise, OpenAI says they will not train on your business data. Since Section 102(a) says that prior art must be available to the public, many are saying that this sort of relationship, where the training is not occurring, is akin to using cloud storage, cloud productivity tools, or even hiring draftspeople under confidentiality obligations to draft figures.
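For listeners wondering what "using the API" looks like in practice, here is a minimal sketch using OpenAI's Python SDK. The model name and prompt are placeholders, and note that the no-training-on-API-data point above is a terms-of-service promise, not something the code itself enforces:

```python
# Minimal sketch of API-based access (versus the public-facing ChatGPT UI).
# Assumes the official OpenAI Python SDK and an OPENAI_API_KEY environment
# variable. Per the terms discussed above, API inputs are not used for model
# training by default -- a policy, not a code-level guarantee, so treat
# confidential invention material with care regardless.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a patent drafting assistant."},
        {"role": "user", "content": "Summarize this invention disclosure: ..."},
    ],
)
print(response.choices[0].message.content)
```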

Time will tell, but that's the state of things. Now on to inventorship. The panel does not discuss AI as an inventor. This has largely been settled and has even stood some testing in the courts. AI cannot be a named inventor for a patent any more than it can be a named author for a copyright. At least for now there appears to be broad consensus for these things, but there's obviously some gray area as it pertains to inventor use of AI to assist during the inventive [00:08:00] process.

Which just happens to be something Professor Adam Mossoff tackles with us in this month's Mossoff Minute segment. 

[00:08:08] Adam Mossoff: AI is all the rage right now, and this is just as true for the innovation economy and the creative industries as it is for those using ChatGPT and other large language models. The issue in the context of intellectual property is whether AI-generated works and inventions are protectable.

Unfortunately, the copyright office has taken the position that they are not. And this is a really important issue because AI is a tool. It's a tool like any other type of tool that has been invented by humans in the past, such as cameras in the context of creating images for creative works, or typewriters and word processors: inventions, new machines, that create

new types of works as well. The patent office is now starting to consider whether [00:09:00] AI works should be protected or not. We have thankfully set aside this kind of metaphysical science fiction question of whether robots are inventors or creators, which they clearly are not. But the important question is always for the patent office to recognize that these are tools that are used by inventors,

and therefore the works that are produced by them should be protected. And this is particularly important in the context of computer software, where AI is a significantly used tool for the production of new computer programs from 5G to other types of enterprise software systems. More to follow soon.

[00:09:40] Josh: Thanks, Adam. We're also publishing clips from the Mossoff Minute as short-form videos on Instagram Reels, YouTube Shorts, and TikTok. You can check out these shorts and follow us at Aurora Patents on all three platforms. After recording this with Adam, we did see what appears to be good news coming out of the USPTO.

In response to an executive order, the USPTO recently released some guidance that I will link to in the [00:10:00] show notes. The gist is that the USPTO is saying that AI assisted inventions are not categorically unpatentable, but that the focus of inventorship analysis falls on the significance of the human contribution.

In order to be named an inventor, a person needs to make a significant contribution to the ideas and not just reduce them to practice. So if an invention has a human inventor that contributed significantly, then it is patentable. If no human contributed significantly and only computers were the inventors, then the USPTO says that the invention is not patentable.

That still leaves some gray space and room for interpretation, so I'm sure this will be tested, but it seems positive at least that the office is viewing AI like the tool that it is. At least for now. So now without further ado, take it away, Dave and Ashley. 

[00:10:44] Dave: So AI is all over the place, including for patents. And so that's what we're going to talk mainly about today: AI for patents.

AI is very powerful and it's improving all the time, but IP and [00:11:00] patents are complex and nuanced, and the stakes are high. Patents are, yes, often very costly to produce, but they're also very valuable, and millions, if not tens or hundreds of millions, of dollars can be on the line in litigation between patents.

So these are high-stakes documents. And as we've said, AI is very powerful and clearly deserves attention from IP practitioners, but the effective and efficient implementation of these tools is not always straightforward. So what I'll do in the first half of the talk is give a background summary of the current status of these tools, broken down into different areas, and in the second half, Ashley will take over and talk more about the future and how these tools may be implemented, and we'll have a discussion around those topics.

So [00:12:00] breaking down the current status, there are tools for four different functions within patents. There are tools for searching; for proofreading; for the patent prosecution stage, when the patent is being examined by the patent office, there are tools to help facilitate that process; and then for drafting.

And there are two different flavors of AI for drafting: one which has a lot more guardrails, sort of the rule-based AI, and then also the large language model, or ChatGPT-type, AI for generative drafting. So we'll talk about all of those separately. We're not going to discuss any particular tools today. We will not endorse or bash any tool or any company, but just to give a sense, there are a lot of AI tools for patents out there, and there are [00:13:00] more all the time. Just to list a few, there are lots of big companies who have tools: Minesoft, Black Hills, Dolcera, PatentBots, PatSnap.

There are also smaller companies coming out with tools that are specific to patents, like PatSnap and PatentPal, and there are many others. So I'm not going to talk too specifically, but in general, we'll talk about the kinds of tools that are available in these different areas.

So the first area is IP searching. This is really a necessity: whenever you file a patent, or when the patent office is examining a patent, searching the prior art is key. And what's interesting about an AI-enabled search is that you can plug a whole bunch of text into a search field and [00:14:00] the AI will find documents, references, that are similar to whatever you plugged in.

So that's really helpful in various situations. Maybe you have a patent that you want to find things similar to: you can plug a whole claim into an AI search box, and the AI search will find references that are similar to that claim. Even at an earlier stage, if you have what's known as an invention disclosure, maybe a write-up of your invention without claims yet, you can plug the entire invention disclosure into the search box, and the search will find references that are similar to that disclosure.

So it's really interesting, and AI is very good at finding references that Boolean searches may miss. A Boolean search would be where you have keywords: "bicycle AND wheel AND oval," or something like that. Whereas if you had a whole [00:15:00] description, "we have a bicycle that has oval wheels, and the wheels are good because that makes a very bouncy ride," or whatever,

then you can plug that all into the AI search and it will find references that are similar. So this can be good and bad. And actually, I've spoken with the trainers at these companies that have these AI search tools, the people who train users on how to use them, and some of them have actually said:

AI might miss some references that Boolean searches find, and vice versa. The AI search might find some tangential, weird reference that's related through totally different keywords, which a keyword Boolean search might miss. So if you want to do a comprehensive search, it's kind of nice to use both, the one and the other.

They both have limitations.
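To make the AI-versus-Boolean distinction concrete, here is a toy sketch. The "embeddings" are hand-made three-number vectors standing in for a real text-embedding model, but the behavior it shows, each method surfacing references the other misses, matches what Dave describes:

```python
# Toy contrast between Boolean (keyword) search and AI (semantic) search.
# The vectors below fake what a learned text-embedding model would produce.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

corpus = {
    "bicycle with oval wheels":      [0.90, 0.10, 0.30],
    "elliptical rims for a cycle":   [0.80, 0.20, 0.40],  # same idea, no shared keywords
    "oval dining table with wheels": [0.10, 0.90, 0.50],  # shared keywords, unrelated idea
}
query_embedding = [0.85, 0.15, 0.35]  # pretend embedding of the full disclosure text

# Boolean search: literal keyword match. Finds the dining table, misses the rims.
boolean_hits = [doc for doc in corpus if "oval" in doc and "wheel" in doc]

# Semantic search: rank by vector similarity. Surfaces the paraphrased reference.
semantic_hits = sorted(corpus, key=lambda d: cosine(query_embedding, corpus[d]), reverse=True)

print("Boolean: ", boolean_hits)
print("Semantic:", semantic_hits[:2])  # using both covers each method's blind spots
```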

[00:15:56] Ashley: Have you heard any feedback on any [00:16:00] kind of, like, obviousness or inventive step searching, how good it is at finding those? Because I know PQAI is a free tool that claims to do it, but in my very brief experience, it's hard for a machine to put those together. Does anybody else have any experience with tools that claim to do those kinds of combinations?

[00:16:22] Dave: I'll say a little bit, I guess. My experience was sort of similar to yours: sometimes the AI finds connections that I wouldn't have expected or thought of from a keyword. But I've done some A/B testing between Boolean searching and AI searching, where I put in a paragraph and then took out what I thought were the keywords and did a Boolean search. And in some of those cases, not all, but sometimes, the AI search missed some really key references that were [00:17:00] top five on the Boolean search from what I thought the keywords were, which is odd.

But then, like you were saying, it may have some potential to find some of those more disparate connections for an obviousness reference. But yeah, I have seen it miss some things that I thought it should find.

[00:17:25] Kristen: So one of the things I've noticed is, when a company is bragging a little bit about the sophistication of their natural language processor, I do find that I get better answers, or more precise answers. Or at least, if they're giving me two or three references, I can ask it and get a good answer to the question: how can these three references be combined to cover this claim? Or: are these concepts in these three references?

So it [00:18:00] seems to me, if you've got a really good natural language processor that's patent-savvy and patent-taught, because we have special rules and really odd things about language, and you can intermix it with somebody who's asking it specific questions, you might get better answers. And this is along the same lines as giving a good prompt to get a good answer.

[00:18:24] Dave: Absolutely. Yes, exactly. And that's going to come up. I'm glad you brought that up, because I think that's crucial, at least today, for getting good, high-quality outputs from these generative AI tools especially: prompt engineering, asking the more specific questions that you really want the answer to, so that it can provide better quality outputs.

Yeah. So, a lot like a lot of [00:19:00] new software innovations, it's hard to predict how they will be used, and there are a lot of interesting applications for AI searching. For instance, what's important in a patent is what was known, or what was published, at the time that the patent was filed, or at the first priority date.

And so you could have a search tool that could actually go back in time, ignore everything that's been published since, and only be trained on information up until that point. This is hypothetical, now; I don't know of any tool that is currently doing that, but that's an interesting application for how this AI could be really helpful, even to determine, in the [00:20:00] parlance of patents, what a person of skill in the art, or PHOSITA, would have known at that time.

And the other thing that, you know, comes up again and again is that comprehensive searches are very costly and time-consuming. And so at the patent office especially, and I think the U.S. patent office is a great example of this, the patent examiners are given a certain amount of time to examine a patent.

And they can only really do a certain amount of searching, a certain amount of considering, before they come to a conclusion about whether it's rejected or allowed. And so, you know, it's often talked about how a lot of issued patents don't hold up in litigation or even at the PTAB.

And I know Ashley will also talk about this in the second half, so we won't dwell there, but [00:21:00] AI searching has the opportunity to address that problem, or at least mitigate it. A more efficient search tool, and AI may be part of that, could help a patent examiner find more relevant prior art more quickly.

And therefore this quality control, if you will, at the patent office could be done during prosecution, while the examiner is considering if a claim should be allowable or not, rather than afterwards in litigation, when someone has spent tons of money paying people to search for tens or hundreds of hours. So I don't know if anybody has any thoughts about that now? Or maybe we should see if that's going to be a discussion point later, and we can come back to it.

[00:21:50] Ashley: Yeah, I'm good either way. Yeah, I mean, it is a fair point.

[00:21:54] Kristen: So yeah, I have a comment. At litigation or at [00:22:00] post-grant is where people are spending all their money on searching and on finding all of these issues, right? With your patent, or that you're infringing their patent. So I see this AI movement as a really good tool to help us get better patents, to help both sides be better seated with their claims and with their drafts, so that more of this isn't happening as often in litigation.

And so we aren't overstepping each other, right? Like it says at the bottom of the slide: use both, use AI and Boolean. If we can do that up front, if we can do that during prosecution while we're maybe deciding on continuations, and we can throw in other searches so that we aren't having clients waste money,

I think that's powerful. I understand it can be a slippery slope, but I think when you're using [00:23:00] something mitigated by human intervention, something that looks at output results that really are just search-based, I don't think there's a whole lot of problem. I think the bigger problems with AI come in from the drafting perspective, but for search, I think it's brilliant.

Just an opinion.

[00:23:19] Dave: Yeah. And I've seen examiner office actions come back where, you know, the examiners are always very specific about what terms they've used to search, et cetera, and now I've seen it a couple of times where they've used an AI search tool.

[00:23:33] Ashley: Oh, interesting. 

[00:23:34] Dave: Hopefully,

[00:23:41] Kristen: Yeah, hopefully it makes for better searches, you know, 'cause not all searches from examiners are great. So if we can improve from that side, we just get better patents.

[00:23:50] Josh: Yeah, this is one of the things, you know, not to get too far ahead, we'll talk about it a little bit later in Ashley's section, but it was something we talked to Judge Michel about a little bit.

And, you know, he said that's [00:24:00] something that he and Kappos have kicked around forever, this notion of a gold-plated patent system. Like, how do we get to something that's closer to a patent that isn't, effectively, just pending until it's tested, you know, in a court?

And so they have sort of philosophically talked about what a separate track like that would look like. And what they've sort of concluded is that it would be way too cost-prohibitive, for inventors and also for the PTO and the examination process, to do that.

But gosh, what if we could get to that sort of resolution by using potentially lower-cost AI tools that scale infinitely better than the human resources that aren't affordable enough to solve the problem?

[00:24:46] Dave: Absolutely, makes sense. I mean, there are good reasons why the system is the way it is now.

You want small companies and individual inventors to be able to afford to file a patent. And so, right, maybe these efficiency tools are [00:25:00] the answer to keeping the cost down and having the quality go up.

[00:25:04] Ashley: Yeah. 

[00:25:04] Dave: Right. Okay. So, the next topic we can just breeze right over, because this is very standard: AI proofreading tools for patents. These have been used for many years. You know, these are highly detailed documents and the wording really matters. And so, for years, I know there have been tools out there that do antecedent basis checking for claims, and that check to make sure that all the terms in the claims are supported by the spec.

So all these proofreading tools are very well known and commonly used, and they are great quality-improving and time-saving tools; efficiency is exactly what AI is good at. The next topic or area is patent prosecution. And we started to talk about this a little [00:26:00] bit, where examiners are using AI tools for searching.

There are some really interesting tools out there, and I think this is a great area for AI: you can analyze the cited prior art, the prior art that an examiner has cited in a rejection, which may be only two, three, four individual references, and compare that to the currently rejected claims and to the specification. And then the AI can

help the patent practitioner understand the rejections by summarizing, or by comparing and contrasting these things. But there are even tools out there now that are starting to suggest amendments, suggest improvements to the claims to overcome the rejections, because this is what AI is good at.

They can analyze the documents, they can see differences: oh, here are a few embodiments, [00:27:00] a few inventive concepts that are in your specification that the AI didn't detect in the prior art, so these could be good candidates for claims to get around this prior art. It's a very interesting area for AI, because I think it's more bounded.

It sort of falls in the rule-based category: you're not going to have it generatively inventing a bunch of stuff that's not real; it's more like comparing this limited set of information. Does anybody have any... yeah, yeah.

[00:27:32] Kristen: Yeah. So you would think, but with the few tools I've used to play with this concept and this exact set of events, you don't always get explicit specification support, the kind that would be required in Europe, or even in the U.S. sometimes. I've had it, not necessarily hallucinate, but grab a word from one part of the spec and put it with a word from another to try to [00:28:00] amend. But I do find this process works better when, rather than asking for the amendment, you ask things like: what concepts could I amend into this claim to overcome this prior art?

And so then you begin to see, well, you know, this isn't covered and that isn't covered. And so then you see these concepts that break out and you say, oh, this is important to my inventor, let me go back to the specification and see. So I find that easier with this. I don't know, maybe it just hasn't come far enough yet that we're getting exact, explicitly supported amendments in proposed amendments.

[00:28:42] Ashley: I would agree with Kristen. I did use one of these to try to get it to draft a rough claim with prompts, and it gave me a great claim. But then when I tried to find some of the explicit support, I was kind of surprised that that was supposedly in the specification. I was like, wow, we had that in [00:29:00] there?

[00:29:02] Dave: We did not. So it sounds like this. Yeah, it's very similar. And if it's in with these other categories where. It's spitting out something that looks plausible, that's sort of at the face of it, it's like, wow, look at how great a job it is, but when you really dig into the nitty gritty, it's flawed, and um, we'll talk about this in a minute, more for the generative stuff, but it seems to apply here as well.

There was a good quote from an article I saw that was basically saying the current state of large language models and AI, particularly for drafting patents, it's limited and it's going to, the output is going to be similar to that of a patent practitioner working out of their depth, out of their technical depth.

So they didn't really understand the technology they're, they're, they're looking at, they can write. Things that look plausible, but since they don't really understand it, it, it, it's maybe not substantive, it may not actually get [00:30:00] around the rejection, or it may not actually claim anything new, or in this case, may not actually have written description support, even though it sort of looks plausibly like it might.

Yeah. 

[00:30:11] Josh: Or for now. 

[00:30:13] Dave: For now. Exactly.

[00:30:15] Josh: For now, right. I was just going to say, when ChatGPT went from version 3.5 to 4, it went from barely passing the bar to scoring in the 90th percentile. So, a half-tick of a version. And that's where I think the big asterisk needs to live on a lot of this stuff: buyer hugely beware right now, because this is leading, bleeding-edge, frontier stuff. These capabilities are there, but they're highly untrustworthy. But don't get lulled into complacency, thinking it's going to be that way forever. These things are game-changing, and they're going to improve at exponential rates that we're probably not even fully anticipating or ready for.

[00:31:00] Dave: I totally agree. Absolutely. We're about to get into the generative stuff and then transition into the discussion about implementation, but in terms of implementation, with any of these tools there's a learning curve. There's some investment in time that a patent practitioner has to put into learning a new tool.

Yes, but also testing it. You know, these are highly important and high-stakes documents. You can't just trust a new tool. You have to spend a lot of time in the beginning making sure what it outputs is what you think it does, et cetera.

And there's a very valid question, I feel, about when is the right time to put in that investment, and is it really now? And, really getting into it all: depending on what types of applications you typically write as a patent practitioner, that could change which tools are the most valuable for you or save you the most [00:32:00] time, et cetera.

Right. So yeah, exactly. Thanks for bringing that up, Josh.

[00:32:07] Ashley: I just want to mention, go ahead, go ahead. I just had two quick examples on this. One was finding a difference that was very, very textual-based, and it was great for that. But then in another example, in two minutes of looking at the figures, you could see what the difference was.

And then almost trying to hand-hold it to get to what you were already assuming was painful. It's just kind of funny how it is very based on what you're comparing.

[00:32:43] Kristen: Yeah, and I would say the best use of something like this that is sandboxed, that you are not opening up to generation, is feeding it patent applications or documents and asking it for a summary. Because those summaries can be used when you discuss things with [00:33:00] examiners, when you discuss things with clients, or to let clients know what the art is about without a whole lot of you digging into that full document. Because sometimes you want to give an office action and say: we'll come up with a plan, but here's what's cited, here's what's rejected, and here's what that art's about, right?

So, a nice topical thing before you really dig in.

[00:33:24] Dave: An efficiency tool, improving the product, improving our work product, our communication. Yeah, allowing us to do more with less time and money, right? So, to finish up this section, there are two slides now on drafting. They're split up between rule-based systems and machine learning systems like ChatGPT, these large language models.

And there's a nice heuristic to remember the difference: machine learning systems are probabilistic, whereas [00:34:00] rule-based AI models are deterministic. So rule-based models are very predictable. You know what they're going to output, and they're not going to hallucinate. ChatGPT-style

large language models are probabilistic. They're working on, well, the next word in this sentence was probably this, and so you end up getting strange results sometimes. So, just to summarize rule-based drafting: you can take a patent specification, let's say it's got 10 figures in it, and everything's numbered in the spec, and you want to insert a new figure between figures 3 and 4. Well, instead of renumbering all the figures by hand, if the patent is drafted in a way the AI can understand, then it can automatically change figure number four to figure number five. And there are even ones that will take, well, maybe figure number four had reference numbers 410, 420, 430; it'll take all those reference numbers and change them to [00:35:00] 510, 520, 530.

There are tools that do that in the specification and also on the figures themselves. This is all rule-based; it's not making things up. It's taking a claim, in some cases, or a description, and outputting a flowchart, let's say, that has all of the elements of a method claim. Or you can have it generate

There are, uh, tools that do that in the specification and also on the figures themselves. This is all rule based. It's not, um, making things up. It's taking a claim, in some cases, or a description, and outputting a flowchart, let's say, that has all of the elements of a method claim, or outputting, um, you, you can have it out, make up a method.

Element numbers and overlay that on a figure that you give it in the case of a system drawing or there's tools that automatically generate system drawings that are box drawings that can be nested and connected with the lines and things like that. Um, you know, uh, um, And these things can all work vice versa.

You can start with claims and generate detailed description. You can start with detailed description, generate claims. There's all kinds of different tools out there. Um, and, you know, even more simple things, but that are time consuming where you could take a set of claims and have it generate a blank claim.[00:36:00] 

Chart to start that you could fill in as a start of an FTO. And so it might save you a few minutes of formatting, but if you're doing that for a lot of patents, are you doing that regularly? That's annoying. And a few minutes of formatting can be, you know, an add up and be really a time saver. Um, so, you know, rule based systems are great cause there's a low error rate.

Um, sometimes you need to change your drafting style so that it understands it. Uh, in some of the tools I've worked with, for instance, if you say figs two and three, it gets confused. You need to say fig two and fig three, or else it won't be able to deal with that. Small things like that. Um, but there's also a lot of limitations compared to these, um, um, machine learning or, or like large LLM, large language model based tools.

So when might you use a rule-based tool? When there's a high danger of error and when you need really speedy outputs, things like that. When do you need to go to a machine learning tool like ChatGPT? [00:37:00] Well, when simple guidelines just don't apply. If you want it to summarize a document, as Kristen just mentioned in that example, you can't really do that with rules; it needs context, it needs to have a lot more involved in that. And also, another interesting thought is pace of change.

Something could be done by rules, but if those rules are constantly changing, it could be more efficient just to implement something that's machine learning, because it's more adaptable; it adapts on its own. In terms of the LLM-based stuff, there are all flavors out there. You can take a patent and it can auto-generate a title or an abstract.

It can generate backgrounds from prior art references that you feed in, detailed description from claims and vice versa, but not in a rule-based sense; in a probabilistic sense, a creative, generative sense, which [00:38:00] is more error-prone: you need to do a lot more checking to make sure it didn't hallucinate. But it can also come up with really nice language and ways to phrase things that you don't have to develop yourself, so that can be a time saver.

Um This comes back to you know Probably currently, I think the most effective way that these large language models are used is with with a patent practitioner and an expert who is, uh, trained in the subject area training patents and also is a good prompt engineer who knows how to ask it, not just say, here's an invention.

Generate a whole detailed description, but right now I need a description of this aspect of the technology or, you know, uh, where it's sort of almost writing paragraph by paragraph or section by section. I need a paragraph that. describes the advantages [00:39:00] of my technology compared to this prior art, and they can give you a good starting point, and then you're getting the, um, sections and the content that, that, that the patent actually needs.
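As a concrete illustration of that section-by-section prompting style, here is a hedged sketch. The template wording and the build_prompt helper are invented for illustration, not any vendor's actual tool, and the output would still need a practitioner's line-by-line verification:

```python
# Illustrative prompt builder for section-by-section patent drafting.
# Both templates are assumptions about what a useful prompt might say.
def build_prompt(section: str, invention: str, prior_art: str = "") -> str:
    templates = {
        "advantages": (
            "You are assisting a registered patent practitioner.\n"
            "Invention summary: {invention}\n"
            "Closest prior art: {prior_art}\n"
            "Draft ONE paragraph describing the advantages of the invention "
            "over this prior art. Do not add features not listed above."
        ),
        "aspect_description": (
            "Describe the following aspect of the technology in the style of "
            "a patent detailed description, one paragraph: {invention}"
        ),
    }
    return templates[section].format(invention=invention, prior_art=prior_art)

prompt = build_prompt(
    "advantages",
    invention="a bicycle rim with an oval cross-section for vibration damping",
    prior_art="a reference disclosing conventional round cross-section rims",
)
print(prompt)  # send to an LLM; the practitioner reviews every factual claim
```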

Um, but there are also tools where you take one sentence and it generates an entire patent, claims, background, so everything. From one sentence, so, um, yeah, they're all different flavors and some of these other things here we've talked about already that, you know, um, LLM generated drafts currently can look like someone working outside their technical field who is out of their depth.

And so an AI generated drafts responses and third party observations can appear responsive, but fail to make any substantive points or claim anything of substance. And this is the, this is the challenge. This is why patent practitioners definitely now, but maybe even. For going forward will always need to be part of the process [00:40:00] because this is just how large language models work.

They work by leveraging known, often published, highly published things and patents by definition are are. Novel and new and are not all the aspects in them are not going to have been described sufficiently elsewhere. So from that aspect, you could imagine that there always needs to be some patent practitioner in the loop.

But, um, as Josh said, these things are getting better all the time. And, um, and, and it's, yeah, who knows what will happen five, 10 years from now, who knows where the technology will be. Um, and so, um, Yeah, I don't know any thoughts on that, right? 

[00:40:43] Josh: Only one thing I wanted to add that I thought was interesting.

You were talking earlier about, you know, sort of the high-stakes-ness of the document, right? And how long it's going to live. Ashley's talked to a lot of inventors about this. We did a kind of [00:41:00] informal survey at a conference we went to, like: hey, if you could use a tool to draft a patent for you, inventor, is that something you would be interested in doing?

Yeah, it's sort of like, you know, cutting the agent or attorney out of the loop. And the vast, overwhelming percentage of people we talked to said that, for the reasons you stated, the importance of the document, the expense, how long-lasting it has to be, etc., they didn't feel comfortable even doing sort of a computer-assisted walkthrough. They felt like they would always want a professional

in the loop. And that's straight from the mouths of inventors, you know, cost-conscious folks. And so, for anybody out there who's listening and worried about their job going away in totality: probably not gonna happen anytime soon.

Maybe not. Maybe not ever. The world's definitely gonna look different, but, you know, hopefully there will always be a place for us.

[00:41:57] Kristen: Yeah, you know, I think we will. I think even [00:42:00] if this AI movement takes 10 percent of our job away, it will be the 10 percent that we did not want to slog through day in and day out anyway.

So I think it'll be okay. And one thing I wanted to say about the prompting that David was talking about: if you all remember way back when, when the search engine was fairly new and then we had kind of a second wave of search engines that were even more robust, people had to learn how to use those from a searching standpoint, right?

There are knowledge graphs and algorithms working behind the scenes that the average consumer has no concept of. And so throwing a search at that to get the right answer that you're actually looking for was really difficult in the beginning. And, you know, your Googles and your Apples and your Microsofts took a lot of effort and a lot of time trying to figure out context, and trying to utilize keywords put into their [00:43:00] browsers in a certain way, so that they could provide those results accurately and better than their competitors, right?

So I think with AI prompts this will move along very similarly, and we'll get to the inventors and drafters, and just the general public, who get better at creating prompts that work better for them. But there's a good chunk of the population, I would say well over 50 percent if I'm guessing, that just gives up after it doesn't work right the first time, and just doesn't have the patience to stick with it and create these better ways of using a tool.

So I wonder if AI will fix that problem with society and begin to generate the prompts themselves, and, you know, give choices and let the user decide what they were really looking for when they said X, Y, Z. So we'll see. It might go better than prior [00:44:00] search engine usage.

[00:44:03] Josh: But also, I mean, I guess it's a little bit out of scope for this conversation, but I think it's highly relevant to the points that David made earlier about different inputs, different outputs, and there being an art to prompt engineering.

I think that's one of the biggest pieces that's sort of missed in the conversation around inventorship and authorship when AI is involved, right? AI is a tool, like my compiler is a tool, like my digital camera is a tool, like Photoshop is a tool. We're all going to use those things, and we're going to get very different results based on our backgrounds, what we put into it, and how it's massaged and tailored along the way.

And so I just want to throw out there that there seems to be a lot of pushback around the use of AI with regard to inventorship and authorship, but I [00:45:00] think that's a mistake. I think that's sort of overanalyzing AI's role relative to every other tool that came before it that we wielded to create things.

[00:45:10] Dave: Yeah, Josh, agreed. Absolutely. And Kristen, to that point, the prompt engineering is really important, and it's a but-for question. It is simply: but for that prompt you put in, that invention wouldn't exist, right? So the person is there. And then one last thing, just to sort of build on that and hand it over to Ashley.

In addition to all of the user training and prompt engineering, I think what we will probably start to see are more specialized tools in certain technical areas. So mechanical, if you're dealing with a lot of mechanical inventions, compared to software inventions with a lot of methods, or compared with biotech with lots of DNA sequences: those might all be very different tools, and [00:46:00] I think you might see AI companies coming out with particular tools for particular

technology areas or workflows, which could be really beneficial. Yes, the prompts are still important, but they'd have specialized tooling behind the scenes to do what those individual applications need. So with that, I will hand it over.

[00:46:31] Ashley: Great primer.

And I think it's actually going to reinforce a lot of the things I'm going to say here, and kind of reiterate them. So I do think, and you've already mentioned these, Dave, that the three main areas, there we go, for how AI is going to impact the patent system are probably around quality, really the PTAB, the Patent Trial and Appeal Board, and that kind of goes towards the quality we were talking [00:47:00] about, the gold-plated patent, maybe having better prosecution, and

reducing the role of the PTAB maybe longer term. But also, and I'll show a graph about this, there's a shrinking pool

You know, I think many, many of you may have heard of the priest Klein hypothesis, right? That in any adversarial institution, the decision rate should be right around 50%. And this is due to the selection effect, right? Where each side is exposed to more and more information at each point. And it gives each side more opportunities to either back out or to settle.

And so most lawsuits don't make it to trial and most disputes don't make it to lawsuits. So by the time you get to the final court decision, you know, these are all the borderline cases that are being decided. And so that's where you're going to end up with that 50 50 mark. And so I think when you look at the PTAB [00:48:00] rates of 84%, you know, that's clearly an institution that's out of balance.

Um, and it's actually, this is not, you know, this rate is not seen in any court system in the United States. Actually, the U. S. court system has roughly a 40 percent invalidation rate, and that's not unusual, but this rate is, you know, kind of bar none in terms of, um, institutions, at least in the U. S. So that's clearly, I think, something that AI might be able to address, and I'll kind of talk through why, and we can speculate how.

Um, and like I said, the early career practitioners, too, there's a big drop off happening in people coming into the patent space. And so that's usually the, you know, a lot of the younger workers that do more of the work that's guided by older professionals. And so if you start having fewer younger people coming into the space, that's going to drive the cost up.

For people to play in the patent system, right to have a practitioner work on their stuff. It's going to be a more senior person. That's going to potentially be more expensive. Um, so that's going to create some issues in the patent system as well. So the goal with AI, of [00:49:00] course, is to kind of bridge this gap, um, to make sure the patent system is as good as it can be moving forward.

And so I think, like I said, quality searching is one of those areas that they've talked a ton about. And I think prosecution is the big, biggest piece. I think litigation and post grant proceedings, you know, what if, um, if we had better prosecution, more art found, all these things earlier. Then the PTAB situation without any political, you know, um, Congress input might actually kind of rectify itself, right.

And get back to that 40, 50 percent where it should be because maybe more art was surfaced earlier in the life cycle of the patent. So it's, it's an interesting, obviously it'd be great if Congress could enact some, you know, Backstops on the P tab, like standing and things like that, but void of that happening, you know, having examiners leverage more AI and actually didn't know that examiners were already doing that Dave.

So that was really interesting to know that you're already seeing some. Do you know if it's like USPTO [00:50:00] sponsored tools or are these tools? Yeah. How are they getting access to these tools? You know what they're. 

[00:50:06] Dave: It's a good question. I'm not exactly sure. Usually, anyway, these are partnerships.

So I think it probably is a partnership. And as we know, sometimes those patent searches are even outsourced to third parties. But it's a good question. I'm not sure.

[00:50:24] Ashley: Yeah. So, if anybody has any other comments about quality and searching, go ahead, but I think that is a huge area where AI will improve the patent system and kind of make some of these institutions

behave more like they should. Um, and then I love the idea.

[00:50:42] Dave: Oh, go ahead. I'm hopeful, but I'm also skeptical. I think we've all seen good searches from some examiners and lower-quality searches from others. And [00:51:00] maybe, by not relying so heavily on the individual examiner manually pulling keywords out of a claim, but rather plugging an entire claim into a search, maybe that can improve the quality. But as we've all talked about, prompt engineering and the human in the loop are still so important with where these tools are at now that I'm skeptical. But I'm hopeful that they'll improve and that it will be able to make at least some impact.

[00:51:34] Ashley: Maybe there'll be prompt engineering for examiners, right? Like, do your normal Boolean searching, which they're probably already trained in, and then add some prompt engineering behind it to more effectively use these tools to find more art. Right?

[00:51:49] Josh: Or you take prompt engineering out of the loop and the input is the drafted patent document, right? Then it's a little less user-dependent and more [00:52:00] algorithmic: you just basically say, try to invalidate me.

[00:52:04] Ashley: That'd be really impressive to get by. Yeah, that's probably the Holy Grail, right? Going through examination like that. But I mean, it ultimately comes down to the claims, right?

The claims are the things being invalidated, right? So you'd still focus on the claims. And then, you know, I love the PHOSITA point. There are already a lot of companies out there. Again, how do you reduce hindsight bias, which is a huge problem with the PTAB, in court cases and elsewhere? It's easy to say that something was obvious or well known when you're 15 or 20 years removed from it.

So how do you use AI to define what was known at any one point in time in history? There are lots of companies already out there using AI to let you have conversations with dead relatives or historical figures. And so obviously you [00:53:00] can; that person is fixed in time, right?

So could you do that to say, well, I want to know what somebody knew in the semiconductor space between 2000 and 2001; summarize all of the publication material that was known at that time, in five pages. And could that almost be, not an expert witness, but expert testimony or something that goes into the record about what was known?

You can still have your witnesses and experts and things like that. But does this provide some kind of backstop for hindsight bias? I don't know if anybody has any thoughts.

[00:53:41] Dave: You know, I don't know the answer to this question, but I wonder if data storage could be an issue.

I know that a lot of the models are constantly being improved, and I'm not sure how much data it takes or how easy it is to roll them back, and whether you [00:54:00] need to save that every day. I have a feeling it's a solvable problem, but it may take a dedicated player who really wants to do it and has a value proposition for it.

[00:54:14] Ashley: One could mark the data by year, right? Like, is the data actually being marked in a chronological way? Like, here's the data.

[00:54:24] Josh: Yeah, and you know, you don't have to store the entire data set as a copy over and over and over again. You just have to store the deltas. We've already solved this problem with source code repositories and version histories and Google documents.

It's stuff like that. Take those same concepts and apply them to this time-traveling procedure that has access to the entire corpus of human knowledge, time-stamped, right? That can erase hindsight bias. Unringing the [00:55:00] bell.
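That delta idea can be sketched with Python's standard difflib: store each document in full once, store only deltas afterward, and replay the corpus as of any date. The class and storage format here are invented for illustration, assuming plain-text documents.

```python
import difflib
from datetime import date

class TimestampedCorpus:
    """Append-only store: full text for a document's first version,
    ndiff deltas for later revisions, replayable as of any date."""

    def __init__(self):
        self.versions = {}  # doc_id -> list of (date, payload)

    def add(self, doc_id, as_of: date, text: str):
        lines = text.splitlines()
        history = self.versions.setdefault(doc_id, [])
        if not history:
            history.append((as_of, lines))  # first version stored whole
        else:
            prev = self._replay(history)
            # an ndiff delta is enough to reconstruct the new version later
            history.append((as_of, list(difflib.ndiff(prev, lines))))

    def as_of(self, doc_id, cutoff: date):
        """Return the document as it existed on the cutoff date, or None."""
        history = [(d, p) for d, p in self.versions.get(doc_id, []) if d <= cutoff]
        return "\n".join(self._replay(history)) if history else None

    @staticmethod
    def _replay(history):
        current = history[0][1]
        for _, delta in history[1:]:
            current = list(difflib.restore(delta, 2))  # recover the newer side
        return current

corpus = TimestampedCorpus()
corpus.add("doc-1", date(2000, 3, 1), "Oval bicycle wheels.")
corpus.add("doc-1", date(2002, 7, 9), "Oval bicycle wheels.\nNow with dampers.")
print(corpus.as_of("doc-1", date(2001, 1, 1)))  # -> "Oval bicycle wheels."
```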

I know those are obviously really, really hard problems, but I think it's also incredibly exciting in terms of what could be done. We're sort of going to have to rely on the politicians to solve the eligibility thing; I don't think there's anything AI can do about that.

But when it comes to obviousness and prior art, and even enablement, I think we can theoretically get a long way there, in terms of more patents granted from the office being something closer to a property right, like the title on your home or your vehicle, something you can more safely, reliably, predictably build upon without having to worry about the rug getting pulled out from under you later on. Whether because of hindsight bias, or because somebody found some obscure piece of prior art that [00:56:00] was overlooked by a tired human whose kids were crying in the background and who needed to get on to the next PTO meeting or something. There's just a lot there. It's very optimistic, but I think there's huge promise there

[00:56:15] Kristen: For this aspect, you do not need to equate AI to a PHOSITA or to any person at all. For this exact aspect, for searching and for

unlocking a fixed point in time, you can do this on a factual basis with a list of facts, and this just happens to be a really good computing way to do it, right? A powerful computer that can make these assessments. As long as those assessments can be proven, these are just facts in the case, right?

Like, Microsoft invented the browser at point XYZ in time, or I think it was actually Netscape Navigator, but those are just facts, and we can look them up and verify them. So we do not have to personify this at all for this [00:57:00] particular use case, right? For searching, for creating a point in time to say: you've used hindsight bias, because the first time this was brought up is here, the second time was here, and you used it against me in a way that's

not correct, right? It's just a list of facts. So we actually do not have to, what is it, anthropomorphize AI in this case.

[00:57:27] Ashley: Yeah, that's interesting. I was meaning it more from the parlance we use. It's more like, before someone even says "you're using this against me; that wasn't brought up then," is there a way, in a court proceeding, to say: in this six-month time period, this was the general state of things? Could it go back and say, between January 1st of 2020 and July [00:58:00] 1st of 2020, give me a summary of all the stuff that was in that space?

Right. 

[00:58:08] Kristen: And then that can be proven, cross-checked, fact-checked, and submitted as evidence, as an affidavit or a declaration, if you want an expert witness to look at it and assess it, right? All of that can be submitted in a case now; this is just a super powerful, easy, and quick way to get that information without having to really do the research.

[00:58:29] Ashley: And then, obviously, to the earlier point, we can increase practitioner output, right? On a per-practitioner basis we could be more effective. So maybe the fact that not a lot of younger people are interested in patent law doesn't really matter, because we can each be more effective.

But also, to Dave's point, there's a ton of investment required: assessing all the tools, figuring out if each is the right fit, maybe changing how you draft to accommodate those tools, because they are opinionated about how [00:59:00] you use them. And obviously it has implications for workflow, also to Dave's point.

Are you drafting mechanical or software or biotech innovations? I think that's going to be hard. And I think there are age-related implications. Younger practitioners are going to pick these tools up more easily; they're less set in their drafting ways. More seasoned practitioners are more set in how they draft, and it's going to be harder for them to adapt to an opinionated AI tool.

I also know that things change depending on what client and what project you're working on, so I think that's hard. And the implications have a really long tail. A lot of people in the legal field in general don't change something until they absolutely have to, because they don't know what the implications of changing it are

until it's too late, right? So a lot of people don't adopt things until they actually have to. And I think that will be a really interesting thing for the legal industry, because I could [01:00:00] see us adopting the super-known quantities, like the rule-based stuff, where you're just helping me renumber things and auto-generate things that I already wrote.

I could see there being some pushback in the legal industry beyond that, just because we don't know how future courts are going to view these documents. So anyway, that's kind of my spiel on this slide, if anybody wants to weigh in or has some thoughts on it.

[01:00:26] Kristen: Okay, don't laugh. I only adopted one space after a period a few years ago; I was a double-spacer. It just takes time. But honestly, even if there are some age-related things, and some practitioners who say, I'm absolutely not going to use it, I absolutely don't want to, I think there's value in seeing what these tools can do and how you can adapt them into your practice.

Even if you don't like what it outputs, I [01:01:00] guarantee it gave you an idea or a concept you would not have come up with yourself, and certainly not in that amount of time. Hopefully this set of AI tools shines on its own and people can see the value. But we'll see.

[01:01:17] Ashley: Yeah, very true. So otherwise, I think it could be an innovation renaissance, potentially, right? If more and more people have AI capability at their fingertips and know how to leverage it, or can learn to, you could end up with kind of an innovation renaissance, right?

With people thinking more outside the box using AI and creating new ways of doing things. And if practitioners are more supercharged with AI, it makes us more efficient and more cost-effective, which gives companies cash back in their pockets so that they can continue to innovate.

So, I mean, I [01:02:00] think it could be an innovation renaissance. There's obviously a potential other version of the world where it's not, which I didn't really get into today. But in the more hopeful sense, I think this could be really good. I think it could help us humans think differently about different areas of innovation and maybe do better and more.

But we'll see. Like I said, the future has not been written. There's no fate. Go ahead.

[01:02:29] Dave: No, I absolutely agree with that. Most of what we've been talking about are text generators, that side of machine learning, and I think you could even group AI image generation in there as well.

But there's a whole other side of AI and machine learning, which is image recognition and voice recognition and big data. I know there are really interesting companies out [01:03:00] there now doing virtual experimentation for engineers and scientists in a physical-science setting, where you can feed it a whole bunch of data from previous experiments and it does its big-data thing and predicts how future experiments are going to work, which can save huge amounts of time in the lab and make things a lot more efficient.

This has already happened in some industries to some degree, and it's only going to keep happening. So I absolutely agree with this renaissance idea, that AI is really going to enable the speed and pace of a lot of things to improve.

[01:03:47] Ashley: Yeah, [01:04:00] absolutely. Anybody else have anything to add? I think that was great, so thanks for setting that up, Dave. Appreciate it. I was also curious about your guys' thoughts on how AI is going to shape legal. One example I had seen, I think it was a LinkedIn post, was a guy talking about how he was using, I think, ChatGPT to surf the internet for infringing products based on claims, and he was saying it was working great.

But it's now being choked back on the legal side. Now he tries to do the same thing and it says, you should consult with a practitioner or legal counsel. So I don't know. My guess is that it's probably those big companies trying to CYA a little bit since all of

the copyright stuff happened. Who knows if they're putting some of those safeguards into other areas of legal too, because it's kind of libel, right? You're suggesting that a product is [01:05:00] infringing, and if that's not really true, well, yeah.

Anyway, who knows if that's where some of those safeguards are coming from. Just: here's some stuff, but consult somebody.

[01:05:12] Dave: Yeah, I applaud OpenAI and those companies for doing that. Early on, if you asked it to generate references, it would make a list of references that were completely fabricated.

And then, as an improvement, I think they said: we can't give you individual references, but here are some common points in references that talk about that. So they're trying to make it more accurate and less misleading.

[01:05:41] Ashley: So don't write your appeal brief with it

[01:05:45] Kristen: and use its fake case law, which is happening.

I think that's the inherent issue that I have with AI. It's not that it's AI and it's powerful and it can do all this. I think we can put guardrails on it; I think we can manage what's going on and use the information appropriately. We can't manage people well, right? What they're going to [01:06:00] do, how they're going to use it, and how they're going to be stupid or irresponsible.

And it's really tough to not be able to trust your population to take this beautiful tool and use it respectfully and appropriately, right? They can't even do it with toys.

[01:06:29] Josh: Back in my IT days, we had something called a PEBCAK error. What that stood for is: the problem exists between the chair and the keyboard.

Yep, that problem AI will not fix. I mean, not until it replaces us entirely.

[01:06:50] Ashley: Yeah, then it'll fix us, right? All right, well, we had a good run. Exactly. We've got to go. But awesome. Thank you, Dave. I really [01:07:00] appreciate it.

[01:07:00] Dave: Yeah, thank you both. Good discussion.

[01:07:05] Ashley: Yeah, good. Bye. All right. Talk to you tomorrow. Bye bye.

[01:07:09] Josh: All right. That's all for today, folks. Thanks for listening, and remember to check us out at aurorapatents.com for more great podcasts, blogs, and videos covering all things patent strategy. And if you're an agent or attorney and would like to be part of the discussion, or an inventor with a topic you'd like to hear discussed, email us at podcast@aurorapatents.com. Do remember that this podcast does not constitute legal advice, and until next time, keep calm and patent on.

Geoffrey Hinton, the British computer scientist who conceived of the neural net behind this wave of AI tech, is considered the godfather of AI. In a recent 60 Minutes interview, he said he believes that in just five years' time, AI may well be able to reason better than us, and that we are moving into a period when, for the first time ever, we will have things more intelligent than ourselves.

And as Agent Smith would remind us, that is the sound of inevitability. It's not if, it's when this will all touch our lives and careers in profound ways. And it's already starting, so we're already behind. So that's our mission today: evaluate where we presently are, and then consider where it could all be heading.

Dr. David Jackrel, President of Jackrel [00:05:00] Consulting, and Dr. Ashley Sloat, President and Director of Patent Strategy here at Aurora, lead a two-part discussion with our all-star patent panel that begins with a deep dive into the present state of AI patent tools, and then moves on to an exploration of how these tools could eventually provide solutions for many problems plaguing the industry, including PTAB invalidation rates, hindsight bias, prior art search quality, and the unsustainable bar.

Dave and Ashley are joined today by our always exceptional group, including Kristen Hansen, Patent Strategy Specialist at Aurora, and Ty Davis, Patent Strategy Associate at Aurora. We didn't get to a couple of topics with the panel that I had hoped to discuss. Since they're very likely top of mind for many in this audience, I did want to take just a minute to comment on the confidentiality elephant in the room, as well as AI and inventorship.

One of the biggest questions swirling around the use of AI in the patenting process centers around confidentiality. Generative AI systems like ChatGPT use models that are actively trained with user provided prompts, and some of those models are considered public record. As we've [00:06:00] discussed at great length in prior episodes, public disclosure before getting a patent application on file is a leading cause of patent death.

And public disclosure has been broadly interpreted to mean that if the information is publicly available, whether seen by someone or not, the invention is not new and is therefore unpatentable. Amazon warned its employees not to include confidential information in their prompts because they started seeing answers pop up that closely matched internal materials.

And the New York Times is suing OpenAI and Microsoft for copyright infringement on the grounds that GPT outputs large chunks of near-verbatim excerpts from Times articles. So are you at risk of creating your own public disclosure by using ChatGPT with confidential invention information? Well, it will be impossible to answer this question with a high level of confidence until it's tested by the courts.

And everyone has to make their own decisions, so this isn't advice or a recommendation. But what we're hearing, at least from software vendors and from OpenAI's most recent terms of service, is that the answer depends on how you interface with the language [00:07:00] model. For the public facing ChatGPT, unless you turn off conversation history, your inputs will be retained and used to train models.

When using the API, which would be the case for a lot of professional software tools, or ChatGPT Team or ChatGPT Enterprise, OpenAI says they will not train on your business data. Since Section 102(a) says that prior art must be available to the public, many are saying that this sort of relationship, where the training is not occurring, is akin to using cloud storage, cloud productivity tools, or even hiring draftspeople under confidentiality obligations to draft figures.

Time will tell, but that's the state of things. Now on to inventorship. The panel does not discuss AI as an inventor. This has largely been settled and has even stood some testing in the courts. AI cannot be a named inventor for a patent any more than it can be a named author for a copyright. At least for now there appears to be broad consensus for these things, but there's obviously some gray area as it pertains to inventor use of AI to assist during the inventive [00:08:00] process.

Which just happens to be something Professor Adam Mossoff tackles with us in this month's Mossoff Minute segment. 

[00:08:08] Adam Mossoff: AI is all the rage right now, and this is just as true for the innovation economy and the creative industries as it is for those using ChatGPT and other large language models. The issue in the context of intellectual property is whether AI-generated works and inventions are protectable.

Unfortunately, the copyright office has taken the position that they are not. And this is a really important issue, because AI is a tool. It's a tool like any other type of tool that has been invented by humans in the past, such as cameras in the context of creating images for creative works, or typewriters and word processors, new machines that create new types of works as well. The patent office is now starting to consider whether [00:09:00] AI works should be protected or not. We have thankfully set aside the kind of metaphysical science-fiction question of whether robots are inventors or creators, which they clearly are not. But the important question is always for the patent office to recognize that these are tools that are used by inventors,

and therefore the works that are produced by them should be protected. And this is particularly important in the context of computer software, where AI is a significantly used tool for the production of new computer programs from 5G to other types of enterprise software systems. More to follow soon.

[00:09:40] Josh: Thanks, Adam. We're also publishing clips from the Mossoff Minute as short-form videos on Instagram Reels, YouTube Shorts, and TikTok. You can check out these shorts and follow us at Aurora Patents on all three platforms. After recording this with Adam, we did see what appears to be good news coming out of the USPTO.

In response to an executive order, the USPTO recently released some guidance that I will link to in the [00:10:00] show notes. The gist is that the USPTO is saying that AI assisted inventions are not categorically unpatentable, but that the focus of inventorship analysis falls on the significance of the human contribution.

In order to be named an inventor, a person needs to make a significant contribution to the ideas and not just reduce them to practice. So if an invention has a human inventor that contributed significantly, then it is patentable. If no human contributed significantly and only computers were the inventors, then the USPTO says that the invention is not patentable.

That still leaves some gray space and room for interpretation, so I'm sure this will be tested, but it seems positive at least that the office is viewing AI like the tool that it is. At least for now. So now without further ado, take it away, Dave and Ashley. 

[00:10:44] Dave: So AI is all over the place, um, including for patents.

And so that's mainly what we're going to talk about today: AI for patents. AI is very powerful and improving all the time, but IP [00:11:00] and patents are complex and nuanced, and the stakes are high. Patents are often, yes, very costly to produce, but they're very valuable, and millions, even tens or hundreds of millions, of dollars can be on the line in litigation between patents.

So these are high-stakes documents. And as we've said, AI is very powerful and clearly deserves attention from IP practitioners, but the effective and efficient implementation of these tools is not always straightforward. What I'll do in the first half of the talk is give a background summary of the current status of these tools, broken down into different areas, and in the second half Ashley will take over and talk more about the future and how these tools may be implemented, and we'll have a discussion around those topics.

So, [00:12:00] breaking down the current status, there are tools for four different functions within patents: searching; proofreading; patent prosecution, when the patent is being examined by the patent office, there are tools to help facilitate that process; and then drafting. And there are two different flavors of AI for drafting: one with a lot more guardrails, the rule-based AI, and then the large-language-model or ChatGPT-type AI for generative drafting. So we'll talk about all of those separately. We're not going to discuss any particular tools today.

We will not endorse or bash any tool or any company, but just to give a sense of the landscape, there are a lot of AI tools for patents out there, with [00:13:00] more all the time. To list a few of the bigger companies with tools: Minesoft, Black Hills, Dolcera, PatentBots, PatSnap.

There are also smaller companies coming out with tools that are specific to patents. We mentioned PatSnap, and there's PatentPal and many others. So I'm not going to talk too specifically; in general, we'll talk about the kinds of tools available in these different areas.

So the first area is IP searching. This is a necessity, really: whenever you file a patent, or when the patent office is examining one, searching the prior art is key. And what's interesting about an AI-enabled search is that you can plug a whole bunch of text into a search field and [00:14:00] the AI will find documents and references that are similar to whatever you plugged in.

That's really helpful in various situations. Maybe you have a patent you want to find things similar to; you can plug a whole claim into an AI search box and the search will find references similar to that claim. Even at an earlier stage, if you have what's known as an invention disclosure, a write-up of your invention without claims yet, you can plug the entire invention disclosure into the search box, and the search will find references that are similar to that disclosure.

So it's really interesting, and AI is very good at finding references that Boolean searches may miss. A Boolean search is where you have keywords: bicycle AND wheel AND oval, something like that. Whereas if you had a whole [00:15:00] description, like, we have a bicycle that has oval wheels, and the wheels are good because they make for a very bouncy ride, or whatever,

then you can plug all of that into the AI search and it will find references that are similar. So this can be good and bad. I've actually spoken with trainers at these companies that have AI search tools, the people who train users on how to use them.

And some of them have actually said AI might miss some references that Boolean searches find, and vice versa. The AI search might find some tangential, weird reference that's related but uses totally different keywords, which a keyword Boolean search would miss. So if you want to do a comprehensive search, it's kind of nice to use both, not one or the other.

They both have limitations.
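The two search styles can be contrasted in a few lines. In this sketch, a toy bag-of-words cosine similarity stands in for the learned embeddings a real AI search tool would use, and the documents and claim are invented:

```python
import math
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

def boolean_hit(terms, doc):
    """Keyword AND search: every term must appear literally."""
    return all(t in set(tokens(doc)) for t in terms)

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    mag = math.sqrt(sum(v * v for v in a.values()) * sum(v * v for v in b.values()))
    return dot / mag if mag else 0.0

def rank(query, docs):
    """Score whole documents against the whole query text, no keywords needed."""
    q = Counter(tokens(query))
    return sorted(docs, key=lambda d: cosine(q, Counter(tokens(d))), reverse=True)

docs = [
    "A bicycle having elliptical wheels for a bouncy ride",
    "Oval gear assembly for a pedal-driven vehicle",
    "A motorcycle suspension system",
]
claim = "A bicycle comprising oval wheels that produce a bouncy ride"
print([d for d in docs if boolean_hit(["bicycle", "oval", "wheels"], d)])  # [] : keywords miss
print(rank(claim, docs)[0])  # the elliptical-wheels reference ranks first
```

The keyword search misses the reference that says "elliptical" instead of "oval," while the similarity ranking surfaces it; on other queries the miss goes the other way, which is the argument for running both.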

[00:15:56] Ashley: Dave, have you heard any feedback on obviousness or inventive-step combinations, on how good it is at finding those? I know PQAI is a free tool that claims to do it, but in my very brief experience it's hard for a machine to put those together. Does anybody have experience with tools that claim to do those kinds of combinations?

[00:16:22] Dave: I'll say a little bit, I guess. My experience was similar to yours: sometimes the AI finds connections that I wouldn't have expected or thought of from a keyword. But I've done some A/B testing between Boolean searching and AI searching, where I put in a paragraph, then took out what I thought were the keywords and ran a Boolean search. And in some of those cases, not all, the AI search missed some really key references that were top five in the Boolean results, which is [00:17:00] odd.

But then, like you were saying, it may have some potential to find some of those more disparate connections for an obviousness reference. But yeah, I have seen it miss some things that I thought it should find.

[00:17:25] Kristen: So one of the things I've noticed is that when a company is bragging a little bit about the sophistication of their natural language processor, I do find that I get better, more precise answers. Or at least, if they're giving me two or three references, I can ask it and get a good answer to the question: how can these three references be combined to cover this claim?

Or: are these concepts in these three references, right? So it [00:18:00] seems to me that if you've got a really good natural language processor that's patent-savvy and patent-taught, because we have special rules and really odd things about language, and you pair it with somebody who's asking it specific questions,

you might get better answers. And this is along the same lines as giving a good prompt to get a good answer.
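The kind of question described here can be scripted as a reusable template. The wording below is an illustration of the pattern, not any vendor's actual prompt:

```python
def combination_prompt(claim, references):
    """Ask how cited references map onto a claim, rather than asking
    the model to draft anything."""
    numbered = "\n\n".join(f"Reference {i}:\n{r}" for i, r in enumerate(references, 1))
    return (
        "You are assisting with patent prosecution.\n\n"
        f"Claim under rejection:\n{claim}\n\n{numbered}\n\n"
        "1. Which claim limitations does each reference disclose?\n"
        "2. How could these references be combined to cover the claim?\n"
        "3. Which limitations, if any, appear in none of the references?"
    )
```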

[00:18:24] Dave: Absolutely, yes, exactly. And I'm glad you brought that up, because that's exactly what, at least today, is crucial for getting high-quality outputs from these generative AI tools: prompt engineering, asking more specific questions that you really want the answer to, so it can provide better-quality outputs.

Yeah. And like a lot of [00:19:00] new software innovations, it's hard to predict how they will be used, and there are a lot of interesting applications for AI searching. For instance, what matters in a patent is what was known, or what was published, at the time the patent was filed, or at the first priority date.

And so you could have a search tool that could actually go back in time: ignore everything that's been published since, and rely only on information up to that point. This is hypothetical for now; I don't know of any tool currently doing it. But that's an interesting application for how this AI could be really helpful, even, in the [00:20:00] parlance of patents, to determine what a person of skill in the art, a PHOSITA, would have known at that time.
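The cutoff itself is the easy half of that hypothetical: filter the corpus by publication date before anything downstream runs. The record format below is invented, and the hard part, guaranteeing the model itself has seen nothing after the date, is exactly the open problem being discussed:

```python
from datetime import date

corpus = [  # invented records; a real tool would query a patent index
    {"title": "Elliptical wheel hub", "published": date(1999, 6, 1)},
    {"title": "Adaptive damper",      "published": date(2003, 2, 9)},
]

def art_before(priority_date, docs):
    """Keep only documents published before the priority date, so any
    downstream search or summary reflects what a PHOSITA could have
    known at filing."""
    return [d for d in docs if d["published"] < priority_date]

print(art_before(date(2000, 1, 1), corpus))  # only the 1999 reference survives
```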

And the other thing that comes up again and again is that comprehensive searches are very costly and time-consuming. At the patent office especially, and the U.S. patent office is a great example of this, patent examiners are given a certain amount of time to examine a patent.

They can only do a certain amount of searching and considering before they reach a conclusion about whether it's rejected or allowed. And it's often talked about how a lot of issued patents don't hold up in litigation or even at the PTAB.

I know Ashley will also talk about this in the second half, so I won't dwell there, but [00:21:00] AI searching has the opportunity to at least mitigate that problem. A more efficient search tool, with AI as part of it, could help a patent examiner find more relevant prior art more quickly, and therefore

this quality control, if you will, could happen at the patent office during prosecution, while the examiner is considering whether a claim should be allowable, rather than afterwards in litigation, when someone has spent tons of money paying people to search for tens or hundreds of hours. So I don't know if anybody has any thoughts about that now? Or maybe we should see if that's going to be a discussion point later and come back to it.

[00:21:50] Ashley: Yeah, I'm good either way. I mean, it is a fair point.

[00:21:54] Kristen: So, yeah, I have a comment. At litigation or at [00:22:00] post-grant is where people are spending all their money on searching and on finding all of these issues, right? With your patent, or with the claim that you're infringing theirs. So I see this AI movement as a really good tool to help us get better patents, to help both sides be better situated with their claims and their drafts, so that less of this is happening in litigation and we aren't overstepping each other, right? Like it says at the bottom of the slide: use both, AI and Boolean. If we can do that up front, if we can do that during prosecution while we're maybe deciding on continuations, and throw in other searches so that clients aren't wasting money,

I think that's powerful. I understand it can be a slippery slope, but when you're using [00:23:00] something mediated by human intervention that looks at output results that really are just search-based, I don't think there's a whole lot of problem. I think the bigger problems with AI come in on the drafting side, but for search, I think it's brilliant.

Just an opinion. 

[00:23:19] Dave: Yeah. And I've seen examiner office actions come back where, you know, examiners are always very specific about what terms they've used to search, and now I've seen it a couple of times where they've used an AI search tool.

[00:23:33] Ashley: Oh, interesting. 

[00:23:34] Dave: Hopefully,

[00:23:41] Kristen: yeah, hopefully it makes for better searches, uh, you know, cause not all searches from examiners are great. So if we can improve from that side, we just get better patents. 

[00:23:50] Josh: Yeah, this is one of the things, not to get too far ahead, we'll talk about it a little later in Ashley's section, but it was something we talked to Judge Michel about a little bit.

He said that's [00:24:00] something that he and Kappos have kicked around forever, this notion of a gold-plated patent system: how do we get something that's closer to a patent you can depend on definitively before it's tested in a court?

And so they've philosophically talked about what a sort of separate track would look like. And what they've concluded is that it would be way too cost-prohibitive for inventors, and also for the PTO and the examination process, to do that.

But gosh, what if we could get that sort of resolution by using potentially lower-cost AI tools that scale infinitely better than the human resources that aren't affordable enough to solve the problem?

[00:24:46] Dave: Absolutely, makes sense. I mean, there are good reasons why the system is the way it is now.

You want small companies and individual inventors to be able to afford to file a patent, and so, right, maybe these efficiency tools are [00:25:00] the answer to keeping the cost down and having the quality go up.

[00:25:04] Ashley: Yeah. 

[00:25:04] Dave: Right. Okay. So the next topic we can just breeze right over, because it's very standard: AI proofreading tools for patents. These have been used for many years. These are highly detailed documents and the wording really matters, so for years there have been tools that do antecedent basis checking for claims, and that check to make sure all the terms in the claims are supported by the spec.
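Antecedent basis checking is a good example of how rule-based these proofreading checks can be. This is a deliberately naive, single-word sketch; real tools handle multi-word terms, plurals, and claim dependencies:

```python
import re

def check_antecedent_basis(claim):
    """Flag 'the X' / 'said X' terms with no earlier 'a/an X' in the claim."""
    introduced, problems = set(), []
    for article, term in re.findall(r"\b(a|an|the|said)\s+([a-z]+)", claim.lower()):
        if article in ("a", "an"):
            introduced.add(term)          # term gets its antecedent basis here
        elif term not in introduced:
            problems.append(f"no antecedent basis for '{article} {term}'")
    return problems

print(check_antecedent_basis(
    "A bicycle comprising a frame and a wheel, wherein the wheel is "
    "coupled to the frame and the handlebar is adjustable."
))  # -> ["no antecedent basis for 'the handlebar'"]
```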

All these proofreading tools are very well known and commonly used, and they're great quality-improving and time-saving tools; efficiency is exactly what AI is good at. The next area is patent prosecution, and we started to talk about this a little [00:26:00] bit already: examiners are using AI tools for searching.

There are some really interesting tools out there, and I think this is a great area for AI, where you analyze the prior art an examiner has cited in a rejection, which may be only two, three, four individual references, and compare that to the currently rejected claims and to the specification. Then the AI can help the patent practitioner understand the rejections by summarizing, or by comparing and contrasting these things. There are even tools out there now that are starting to suggest amendments, improvements to the claims to overcome the rejections, because this is what AI is good at.

They can analyze the documents and see differences: oh, here are a few embodiments, [00:27:00] a few inventive concepts that are in your specification that the AI didn't detect in the prior art. So these could be good candidates for claims to get around the prior art. It's a very interesting area for AI, because I think it's more bounded.

It sort of falls in the rule-based category. You're not going to have it generatively inventing a bunch of stuff that's not real; it's more like comparing a limited set of information. Does anybody have any thoughts? Yeah, go ahead.
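A crude stand-in for that bounded comparison: pull terms from each specification paragraph and subtract everything that appears in the cited art. Real tools match concepts rather than raw strings, but the shape of the computation is similar:

```python
import re

def terms(text):
    return set(re.findall(r"[a-z]{4,}", text.lower()))  # crude term extraction

def uncited_concepts(spec_paragraphs, cited_references):
    """Per paragraph, list terms that appear in no cited reference:
    candidate material for amendments, pending human review."""
    art = set().union(*(terms(ref) for ref in cited_references))
    return {i: sorted(terms(p) - art) for i, p in enumerate(spec_paragraphs)}

spec = ["The damper comprises a magnetorheological fluid.",
        "The wheel is elliptical."]
art = ["A damper with hydraulic fluid.", "A circular wheel."]
print(uncited_concepts(spec, art))
# -> {0: ['comprises', 'magnetorheological'], 1: ['elliptical']}
```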

[00:27:32] Kristen: Yeah, so you would think. But with the few tools I've used to play with this concept and this exact set of events, you don't always get the explicit specification support that would be required in Europe, or even in the U.

S. sometimes. I've had it,

And so then you begin to see, well, you know, this isn't covered and that isn't covered. And so then you see these concepts that break out and you say, oh, this is important to my inventor. Let me go back to the specification and see. So I find that easier with this. Um, I don't know, maybe, maybe it just hasn't gone, come far enough, right?

That we're getting exact, explicitly supported amendments in, in proposed amendments. 

[00:28:42] Ashley: I would agree with Kristen. I did use one of these to try to get it to draft a rough claim with prompts, and it gave me a great claim. But then when I tried to find some of the explicit support, I was kind of surprised that it was supposedly in the specification.

I was like, wow, we had that in [00:29:00] there. 

[00:29:02] Dave: We did not. So it sounds very similar; it's in with these other categories where it's spitting out something that looks plausible. On the face of it, it's like, wow, look what a great job it did, but when you really dig into the nitty-gritty, it's flawed. And we'll talk about this in a minute, more for the generative stuff, but it seems to apply here as well.

There was a good quote from an article I saw, basically saying the current state of large language models for drafting patents is limited, and the output is going to be similar to that of a patent practitioner working out of their technical depth.

They don't really understand the technology they're looking at. They can write things that look plausible, but since they don't really understand it, it's maybe not substantive: it may not actually get [00:30:00] around the rejection, or claim anything new, or, in this case, actually have written description support, even though it looks plausibly like it might.

Yeah. 

[00:30:11] Josh: Or for now. 

[00:30:13] Dave: For now. Exactly. For now.

[00:30:15] Josh: Right. I was just going to say, when ChatGPT went from version 3.5 to 4, it went from barely passing the bar to scoring in the 90th percentile. A half-tick version change. And that's where I think the big asterisk needs to live on a lot of this stuff: buyer hugely beware right now, because this is leading, bleeding-edge frontier territory. These capabilities are there, but they're highly untrustworthy. But don't get lulled into complacency and think it's going to be that way forever.

These things are game-changing, and they're going to improve at exponential rates that [00:31:00] we're probably not even fully anticipating or ready for.

[00:31:00] Dave: I totally agree. Absolutely. We're about to get into the generative stuff and then transition into the discussion about implementation, but in terms of implementation, with any of these tools there's a learning curve.

There's an investment of time that a patent practitioner has to put into learning a new tool, yes, but also into testing it. These are highly important, high-stakes documents. You can't just trust a new tool; you have to spend a lot of time in the beginning making sure it outputs what you think it does, et cetera.

And there's a very valid question, I feel, about when the right time is to put in that investment, and whether it's really now. And depending on what types of applications you typically write as a patent practitioner, that could change which tools are the most valuable for you or save you the most [00:32:00] time, et cetera.

Right. So, yeah, exactly. Thanks for bringing that up, Josh.

[00:32:07] Ashley: I just want to mention, real quick, two examples on this. One was finding a difference that was very text-based, and it was great for that. But in another example, within two minutes of looking at the figures you could see what the difference was,

and then almost trying to hand-hold it to get to what you were already assuming was painful. It's just kind of funny how much it depends on what you're comparing.

[00:32:43] Kristen: Yeah. And I would say the best use of something like this that is sandboxed, that you are not opening up to generation, is feeding it patent applications or documents and asking for a summary. Those summaries can be used when you discuss things with [00:33:00] examiners or with clients, or to let clients know what the art is about, without you digging deep into the full document. Because sometimes you want to send an office action and say: we'll come up with a plan, but here's what's cited, here's what's rejected, and here's what that art is about, right?

So, nice topical thing before you really dig in. 

[00:33:24] Dave: An efficiency tool, improving our work product and our communication, allowing us to do more with less time and money, right? So, to finish up this section, there are two slides on drafting. They're split between rule-based systems and machine learning systems like ChatGPT, these large language models.

And there's a nice heuristic to remember the difference: machine learning systems are probabilistic, whereas [00:34:00] rule-based AI models are deterministic. Rule-based models are very predictable; you know what they're going to output. They're not going to hallucinate.

ChatGPT-style large language models are probabilistic. They work on, well, the next word in this sentence is probably this, and therefore you sometimes end up with strange results. So, to summarize rule-based drafting: you can take a patent specification, say it's got ten figures and everything's numbered in the spec, and you want to insert a new figure between figures 3 and 4. Instead of renumbering all the figures by hand, if the patent is drafted in a way the AI can understand, it can automatically change figure number four to figure number five.

Well, instead of renumbering all the figures by hand, If the patent is drafted in a way the AI can understand it, then, um, you can automatically change figure number four to figure number five. And there's even ones that will take, well, maybe figure number four had reference numbers, 410, 424 30. It'll take all those reference numbers and change them to [00:35:00] 5, 10, 5, 25, 30.

There are tools that do that in the specification and also on the figures themselves. This is all rule-based; it's not making things up. It's taking a claim, in some cases, or a description, and outputting a flowchart that has all the elements of a method claim. Or you can have it make up

element numbers and overlay them on a figure that you give it, in the case of a system drawing. There are also tools that automatically generate system drawings, box drawings that can be nested and connected with lines and things like that. And these things can all work vice versa.

You can start with claims and generate detailed description, or start with detailed description and generate claims; there are all kinds of different tools out there. And there are even simpler but time-consuming things: you could take a set of claims and have it generate a blank claim [00:36:00] chart to fill in as the start of an FTO. That might only save you a few minutes of formatting, but if you're doing that regularly for a lot of patents, those minutes add up and it's a real time-saver. So rule-based systems are great because there's a low error rate.

Sometimes you need to change your drafting style so the tool understands it. In some of the tools I've worked with, for instance, if you say "figs two and three" it gets confused; you need to say "fig two and fig three" or it won't be able to deal with it. Small things like that. But there are also a lot of limitations compared to the machine learning, large language model (LLM) based tools.

So when might you use a rule-based tool? When there's a high danger of error, and when you need really speedy outputs, things like that. When do you need to go to a machine learning tool like ChatGPT? [00:37:00] When simple guidelines just don't apply. If you want it to summarize a document, as Kristen just mentioned in that example, you can't really do that with rules; it needs context, it needs a lot more involved than that. Another interesting consideration is pace of change.

Something could be done by rules, but if those rules are constantly changing, it could be more efficient to implement something that's machine-learned, because it's more adaptable; it adapts on its own. In terms of the LLM-based stuff, there are all flavors out there: you can take a patent and it can auto-generate a title or an abstract.

It can generate backgrounds from prior art references that you feed in, detailed description from claims and vice versa, not in a rule-based sense but in a probabilistic, creative, generative sense. That [00:38:00] is more error-prone; you need to do a lot more checking to make sure it didn't hallucinate. But it can also come up with really nice language and ways to phrase things that you don't have to develop yourself, so it can be a time-saver.

Um This comes back to you know Probably currently, I think the most effective way that these large language models are used is with with a patent practitioner and an expert who is, uh, trained in the subject area training patents and also is a good prompt engineer who knows how to ask it, not just say, here's an invention.

generate a whole detailed description. But rather: right now I need a description of this aspect of the technology, almost writing paragraph by paragraph or section by section. I need a paragraph that describes the advantages [00:39:00] of my technology compared to this prior art. It can give you a good starting point, and then you're getting the sections and the content that the patent actually needs.
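That narrow, section-by-section ask can be captured as a simple template; the wording is illustrative only:

```python
def advantage_paragraph_prompt(invention, prior_art):
    """One paragraph, one purpose: the narrow ask recommended above,
    rather than 'generate a whole detailed description'."""
    return (
        "Draft one paragraph in patent-specification style describing the "
        "advantages of the technology below over the prior-art excerpt. "
        "Do not introduce features that are not mentioned.\n\n"
        f"Technology:\n{invention}\n\nPrior-art excerpt:\n{prior_art}"
    )
```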

But there are also tools where you provide one sentence and it generates an entire patent: claims, background, everything, from one sentence. So there are all different flavors. And some of this we've talked about already: LLM-generated drafts currently can look like the work of someone operating outside their technical field, out of their depth.

AI-generated drafts, responses, and third-party observations can appear responsive but fail to make any substantive points or claim anything of substance. And this is the challenge. This is why patent practitioners, definitely now and maybe always, will need to be part of the process, [00:40:00] because this is just how large language models work.

They work by leveraging known, often highly published things, and patents by definition are novel and new; not all of their aspects will have been described sufficiently elsewhere. So from that aspect, you could imagine there always needs to be a patent practitioner in the loop.

But as Josh said, these things are getting better all the time, and who knows what will happen five or ten years from now, where the technology will be. So, yeah, I don't know, any thoughts on that?

[00:40:43] Josh: Only one thing I wanted to add that I thought was interesting. You were talking earlier about the high-stakes-ness of the document, right? How long it's going to live. Ashley's talked to a lot of inventors about this; we did an informal [00:41:00] thing at a conference we went to, like: hey, inventor, if you could use a tool to draft a patent for you, is that something you'd be interested in doing?

Sort of cutting the agent or attorney out of the loop. And the overwhelming percentage of people we talked to said that, for the reasons you stated, the importance of the document, the expense, how long-lasting it has to be, et cetera, they didn't feel comfortable even doing a computer-assisted walkthrough. They felt like they would always want a professional

Like they, they would, they feel like they would always want a professional. In the loop. Um, you know, and that's that's straight from the mouths of inventors cost, you know, cost conscious, uh, you know, folks and stuff. And so, um, you know, for anybody out there who's listening and worried about their job going away in totality, um, probably not gonna happen anytime soon.

Maybe not ever. The world's definitely going to look different, but hopefully there will always be a place for us.

[00:41:57] Kristen: Yeah, I think there will be. I think even [00:42:00] if this AI movement takes 10 percent of our job away, it will be the 10 percent that we did not want to slog through day in and day out anyway.

So I think it'll be okay. And one thing I wanted to say about the prompting that David was talking about: if you all remember way back when the search engine was fairly new, and then we had a second wave of search engines that were even more robust, people had to learn how to use those from a searching standpoint, right? There are knowledge graphs and algorithms working behind the scenes that the average consumer has no concept of.

There's, there's knowledge graphs and algorithms behind the scenes working that the average consumer has no concept about. And so throwing a search at that to get the right answer that you're actually looking for was really difficult in the beginning. And You know, your Googles and your Apples and your Microsofts took a lot of effort and a lot of time trying to figure out context and trying to utilize, uh, keywords put into their [00:43:00] browsers in a certain way so that they could provide those results accurately and better than their competitor, right?

So with AI prompts, I think this will move along very similarly, and we'll get to the point where inventors and drafters and the general public get better at creating prompts that work for them. But there's a good chunk of the population, I'd guess well over 50%, that just gives up when it doesn't work right the first time and doesn't have the patience to stick with it and develop better ways of using a tool.

So I wonder if AI will fix that problem for society and begin to generate the prompts itself, give choices, and let the user decide what they were really looking for when they said XYZ. So we'll see; it might go better than prior [00:44:00] search engine adoption did.

[00:44:03] Josh: I guess it's a little bit out of scope for this conversation, but I think it's highly relevant to the points that David made earlier about different inputs, different outputs, and there being an art to prompt engineering.

I think that's one of the biggest pieces that's missed in the conversation around inventorship and authorship when AI is involved: AI is a tool, like my compiler is a tool, like my digital camera is a tool, like Photoshop is a tool. We're all going to use those things, and we're going to get very different results based on our backgrounds, what we put into it, and how it's massaged and tailored along the way.

So I just want to throw that out there. There seems to be a lot of pushback around the use of AI with regard to inventorship and authorship, but I think that's a [00:45:00] mistake. I think that's overanalyzing AI's role relative to every other tool that came before it that we wielded to create things.

[00:45:10] Dave: Yeah, Josh, agree absolutely. And Kristen, to that point, the prompt engineering is really important. It's a but-for question: but for that prompt you put in, that invention wouldn't exist, right? So the person is there. And then one last thing, just to build on that before I hand it over to Ashley.

In addition to all of the user training and prompt engineering, I think what we'll probably start to see are more specialized tools in certain technical areas. So if you're dealing with a lot of mechanical inventions, compared to software inventions with a lot of methods, or compared with biotech with lots of DNA sequences, those might all be very different tools that [00:46:00] I think you'll see AI companies coming out with, particular tools for particular

technology areas or workflows, which could really be beneficial. Yes, the prompts are still important, but these tools would have specialized machinery behind the scenes to do what those individual applications need. So with that, I will hand it over.

[00:46:31] Ashley: Great primer.

And I think it's actually going to reinforce a lot of things I'm going to say here, and kind of reiterate them. You've already mentioned these, Dave: the three main areas for how AI is going to impact the patent system are probably around quality, really the PTAB, the Patent Trial and Appeal Board, and that kind of goes toward the quality we were talking [00:47:00] about, the gold-plated patent, maybe having better prosecution, but

reducing the role of the PTAB maybe longer term. But also, and I'll show a graph about this, there's a shrinking pool of early-career practitioners; fewer people are coming into the patent space. So are we going to be able to fix that with AI as well? So, for the PTAB, everybody knows it's a huge problem.

Many of you may have heard of the Priest-Klein hypothesis, right? That in any adversarial institution, the decision rate should be right around 50 percent. And this is due to the selection effect, where each side is exposed to more and more information at each point, which gives each side more opportunities to either back out or settle.

So most lawsuits don't make it to trial, and most disputes don't make it to lawsuits. By the time you get to a final court decision, these are all the borderline cases being decided, and that's where you end up with that 50-50 mark. So when you look at the PTAB [00:48:00] invalidation rates of 84 percent, that's clearly an institution that's out of balance.

This rate is not seen in any court system in the United States. The U.S. court system has roughly a 40 percent invalidation rate, and that's not unusual, but the PTAB's rate is kind of bar none among institutions, at least in the U.S. So that's clearly something I think AI might be able to address, and I'll talk through why, and we can speculate how.
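
To make that selection effect concrete, here's a minimal simulation sketch. It's not from the discussion, and every number in it, the noise level and the settlement margin, is an illustrative assumption: each case has a true merit, both sides estimate that merit imperfectly, cases both sides see as clear get settled, and only the borderline ones get decided, so the decided-case win rate drifts toward 50 percent.

```python
import random

def simulate(n_cases=100_000, noise=0.15, settle_margin=0.1):
    """Toy Priest-Klein selection-effect simulation (illustrative only).

    Each case has a true probability p that the patent holder prevails.
    Both sides observe p with independent noise; when both estimates
    are clearly on the same side of 50%, the likely loser settles.
    Only contested, borderline cases reach a decision.
    """
    decided, wins = 0, 0
    for _ in range(n_cases):
        p = random.random()                 # true merit of the case
        est_a = p + random.gauss(0, noise)  # patent holder's estimate
        est_b = p + random.gauss(0, noise)  # challenger's estimate
        same_side = (est_a - 0.5) * (est_b - 0.5) > 0
        both_clear = min(abs(est_a - 0.5), abs(est_b - 0.5)) > settle_margin
        if same_side and both_clear:
            continue                        # settled; never decided
        decided += 1
        wins += random.random() < p         # decision tracks true merit
    return wins / decided

print(f"win rate among decided cases: {simulate():.2%}")  # hovers near 50%
```

In a toy model like this, a decided-case rate as lopsided as 84 percent only shows up if something systematically skews which cases get filed or how they get decided, which is exactly the imbalance being described.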

And like I said, the early-career practitioners, too: there's a big drop-off in people coming into the patent space. Usually it's the younger workers who do more of the work, guided by older professionals. So if you start having fewer younger people coming into the space, that's going to drive up the cost

for people to play in the patent system, right, to have a practitioner work on their stuff. It's going to be a more senior person, and that's potentially more expensive. So that's going to create some issues in the patent system as well. The goal with AI, of [00:49:00] course, is to bridge this gap, to make sure the patent system is as good as it can be moving forward.

And so, like I said, quality searching is one of those areas people have talked a ton about, and I think prosecution is the biggest piece. For litigation and post-grant proceedings: what if we had better prosecution, more art found, all these things earlier? Then the PTAB situation, without any political input from Congress, might actually kind of rectify itself, right,

and get back to that 40 to 50 percent where it should be, because maybe more art was surfaced earlier in the life cycle of the patent. Obviously it'd be great if Congress could enact some backstops on the PTAB, like standing requirements and things like that, but absent that happening, having examiners leverage more AI... And I actually didn't know that examiners were already doing that, Dave.

So that was really interesting, to hear that you're already seeing some. Do you know if these are, like, USPTO-[00:50:00]sponsored tools? How are they getting access to these tools?

[00:50:06] Dave: It's a good question. I'm not exactly sure. Usually, anyway, these come through partnerships.

So I think it probably is a partnership. And as we know, sometimes even at the patent office, those searches are outsourced to third parties. But it's a good question; I'm not sure.

[00:50:24] Ashley: Yeah. So, I don't know if anybody has any other comments about quality and searching, but I think that is a huge area where AI will improve the patent system and make some of these institutions

more normalized toward how they should behave. And then I love the idea...

[00:50:42] Dave: Oh, go ahead. I'm hopeful, but I'm also skeptical. I think we've all seen good searches from some examiners and lower-quality searches from other examiners. And [00:51:00] maybe, by taking some of the manual work out of the loop, by not relying so heavily on the individual examiner pulling keywords out of a claim by hand, but rather plugging an entire claim into a search,

maybe that can improve the quality. But as we've all talked about, prompt engineering and the human in the loop are still so important with where these tools are at now that I'm skeptical. But I'm hopeful that they'll improve and that they'll be able to make at least some impact.

[00:51:34] Ashley: Maybe there'll be like prompt engineering for examiners, right?

Like, do your normal Boolean searching, which they're probably already trained in, and then we'll do some prompt engineering on top of that to more effectively use these tools to find more art. Right.

[00:51:49] Josh: Or you take prompt engineering out of the loop, and the input is the drafted patent document, right?

Then it's a little bit less user-dependent and more [00:52:00] algorithmic: you just basically say, try to invalidate me.
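
As a rough illustration of that "whole document in, prior art out" idea, here's a minimal sketch using off-the-shelf TF-IDF similarity rather than any real patent-search product. The reference numbers, claim text, and corpus are all made up, and a production tool would use far more sophisticated semantic matching, but the shape is the same: the entire claim is the query, with no hand-picked keywords or prompt engineering.

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical prior-art corpus; a real system would index millions of
# patents and publications, each tagged with its publication date.
prior_art = {
    "US1111111": "A beverage container with a vacuum-insulated double wall...",
    "US2222222": "A double-walled flask having an evacuated gap and a lid...",
    "US3333333": "A bicycle frame formed from carbon fiber tubing...",
}

# The entire drafted claim is the query: no manual keyword extraction.
claim = ("A drinking vessel comprising an inner wall and an outer wall "
         "separated by an evacuated space, and a removable lid.")

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(prior_art.values()) + [claim])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Rank references by similarity to the claim, most threatening first.
for ref, score in sorted(zip(prior_art, scores), key=lambda x: -x[1]):
    print(f"{ref}: {score:.2f}")
```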

[00:52:04] Ashley: That'd be really impressive. Yeah, that's probably the Holy Grail, right? Getting through examination like that. But it ultimately comes down to the claims, right?

The claims are the things being invalidated, so you'd still focus on the claims. And then, I love the PHOSITA point. There are already a lot of companies thinking about, again, how do you take court cases and such and reduce hindsight bias, which is a huge problem with the PTAB? It's easy to say that something was easy or well known when you're 15 or 20 years removed from it.

So how do you use AI to define what was known at any one point in time in history? There are lots of companies already out there using AI to let you have conversations with dead relatives or historical figures. And obviously you [00:53:00] can do that because that person is fixed in time, right?

So could you do that to say: I want to know what somebody knew in the semiconductor space between 2000 and 2001; summarize all of the publication material that was known at that time in five pages. And could that almost be, not an expert witness, but something like expert testimony that goes into the record about what was known?

You could still have your witnesses and experts and things like that. But does this provide some kind of backstop against hindsight bias? I don't know if anybody has any thoughts.

[00:53:41] Dave: You know, I don't know the answer to this question, but I wonder if data storage could be sort of an issue.

I know that a lot of the models are constantly being improved, and I'm not sure how much data it takes or how easy it is to roll them back, and whether you'd [00:54:00] need to save that every day. I have a feeling it's a solvable problem, but it may take a dedicated player who really wants to do it and has a value proposition for it.

[00:54:14] Ashley: One could mark the data by year, right? Like, is the data actually being marked in a chronological way? Like, here's the data...

[00:54:24] Josh: Yeah, and you don't have to store the entire data set as a copy over and over and over again. You just have to store the deltas. We've already solved this problem with source code repositories and version histories and Google documents.

It's stuff like that. Take those same concepts and apply them to this time-traveling procedure that has access to the entire corpus of human knowledge, timestamped, right, and can erase the hindsight bias, the [00:55:00] unringing of the bell.
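
A minimal sketch of that append-only, timestamped idea, with entirely hypothetical documents and dates: each publication is stored once, on its date (that's the delta), and "what was known" as of any cutoff is reconstructed by filtering, so later art can't leak into the picture.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Record:
    doc_id: str
    published: date
    text: str

# Append-only log: each entry is a delta, one document added to the
# corpus on its publication date. Nothing is ever copied or rewritten.
corpus = [
    Record("paper-001", date(2000, 3, 14), "130nm process node results..."),
    Record("paper-002", date(2001, 6, 2), "Strained silicon channels..."),
    Record("paper-003", date(2005, 9, 30), "FinFET manufacturing at scale..."),
]

def known_as_of(cutoff: date, topic: str = "") -> list[Record]:
    """Reconstruct the state of the art on `cutoff`: everything published
    on or before that date, optionally filtered by a topic keyword."""
    hits = [r for r in corpus if r.published <= cutoff]
    if topic:
        hits = [r for r in hits if topic.lower() in r.text.lower()]
    return hits

# "What did somebody in the semiconductor space know by the end of 2001?"
for r in known_as_of(date(2001, 12, 31)):
    print(r.doc_id, r.published)  # paper-003 never appears: no hindsight
```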

I know those are obviously really, really hard problems, but I think it's also incredibly exciting in terms of what could be done. We're sort of gonna have to rely on the politicians to solve the eligibility thing; I don't think there's anything we can do about that.

But when it comes to obviousness and prior art, and even enablement, I think we can theoretically get a long way toward more patents granted by the office being something closer to a property right, like the title on your home or your vehicle, something you can more safely,

reliably, and predictably build upon without having to worry about the rug getting pulled out from under you later on because of hindsight bias, or because somebody was able to find some obscure piece of prior art that [00:56:00] was overlooked by a tired human whose kids were crying in the background and who needed to get on

to the PTO meeting or something. There's just a lot there. It's very optimistic, but I think there's huge promise.

[00:56:15] Kristen: For this aspect, you do not need to equate AI to a person at all. For this exact aspect, for searching and for kind of

unlocking a fixed point in time, you can do this on a factual basis with a list of facts; this just happens to be a really good computing way to do it, right, a powerful computer that can make these assessments. And as long as those assessments can be proven, these are just facts in the case, right?

Like, Microsoft invented the browser at X point in time, or I think it was actually Netscape Navigator, but those are just facts, and we can look them up and verify them. So we do not have to personify this at all for this [00:57:00] particular use case, right, for searching, for creating a point in time to say: you've used hindsight bias, because the first time this was brought up is here, the second time was here, and you used it against me in a way that's not

correct, right? It's just a list of facts. So we actually do not have to, what is it, anthropomorphize AI in this case.

[00:57:27] Ashley: Yeah, that's interesting. I was meaning it more from the parlance we use. It's more like, before someone even says you're using this against me, is there a way, in a court proceeding, to say: in this six-month time period, this was the general state of things? Could you go back and say, between January 1st of 2020 and July [00:58:00] 1st of 2020, give me a summary of all the stuff that was in that space?

Right. 

[00:58:08] Kristen: And then that can be proven, cross-checked, fact-checked, and submitted as evidence, as an affidavit or as a declaration, if you want an expert witness to look at it and make an assessment, right? So that can all be submitted in a case now, but this is just a super powerful, easy, and quick way to get that info, without having to really do the research.

[00:58:29] Ashley: And then, obviously, to the earlier point, we can increase practitioner output, right? On a per-practitioner basis, we could be more effective. So maybe the fact that there aren't a lot of younger people interested in patent law doesn't really matter, because we can each be more effective.

But also, to Dave's point, there's a ton of investment required: assessing all the tools, figuring out if they're the right fit, maybe changing how you draft to accommodate those tools, because they are opinionated about how [00:59:00] you use them. It obviously has implications for workflow, too, to Dave's point:

are you drafting mechanical or software or biotech innovations? I think that's going to be hard. And I think there are age-related implications. Younger practitioners are going to pick these tools up more easily; they're less set in their drafting ways. Whereas older practitioners, or ones more seasoned in the field, are more set in how they draft, and it's going to be harder to adapt to an opinionated AI tool.

I also know that things obviously change depending on what client and what project you're working on, so I think that's hard. And the implications have a really long tail. A lot of people in the patent field, and in the legal field in general, don't change something until they absolutely have to, because they don't know what the implications of changing it are

until it's too late, right? So a lot of people don't adopt things until they actually have to. I think that will be a really interesting thing for the legal industry, because I could [01:00:00] see us adopting the super-known quantities, the rule-based stuff, where you're just helping me renumber things and auto-generate things that I already wrote,

but I could see there being some pushback in the legal industry beyond that, just because we don't know how future courts are going to view these documents. So anyway, that's kind of my spiel on this slide, if anybody wants to weigh in or has some thoughts on it.

[01:00:26] Kristen: Okay, don't laugh.

I just adopted one space after a period, like, a few years ago. I was a double-spacer before; it just takes time, the no-double-space thing. But honestly, even if there are some age-related things and some practitioners who say, oh, I'm absolutely not going to use it, I absolutely don't want to, I think there's value in seeing what these tools can do and how you can adapt them into your practice.

Even if you don't like what it outputs, I [01:01:00] guarantee it gave you an idea or a concept you would not have come up with yourself, and certainly not within that amount of time. Hopefully this set of AI tools shines on its own and people can see the value, but we'll see.

[01:01:17] Ashley: Yeah, very true. So otherwise, I think it could be an innovation renaissance, potentially, right? If more and more people have AI capability at their fingertips and know how to leverage it, or can learn how to leverage it, you could end up with kind of an innovation renaissance, right?

With people thinking more outside the box using AI and creating new ways of doing things. And if practitioners are more supercharged with AI, it makes us more efficient and more cost-effective, which gives companies cash back in their pockets so that they can continue to innovate.

So, I mean, I [01:02:00] think it could be an innovation renaissance. There's obviously a potential other version of the world where it's not, which I didn't really get into today. But in the more hopeful sense, I think this could be really good. I think it could help us humans think differently about different areas of innovation and maybe do better and more.

But we'll see. Like I said, the future has not been written; there's no fate. Go ahead.

[01:02:29] Dave: No, I absolutely agree with that. Most of what we've been talking about are text generators, that side of machine learning, and I think you could even group AI image generation in there as well.

But there's a whole other side of AI and machine learning, which is image recognition and voice recognition and big data. I know there are really interesting companies out [01:03:00] there now doing virtual experimentation for engineers and scientists in a physical-science setting, where you can feed in a whole bunch of data from previous experiments and it does its big-data thing and predicts how future experiments are going to turn out, which can save huge amounts of time in the lab and make things a lot more efficient.
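
As a toy illustration of that virtual-experimentation idea, with entirely made-up numbers: fit a model on results from past experiments, then score a proposed experiment before spending lab time on it. A real system would need far more data and domain-specific features.

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical past experiments: [temperature_C, pressure_atm, catalyst_pct]
X_past = np.array([
    [150, 1.0, 0.5],
    [175, 1.2, 0.5],
    [200, 1.5, 1.0],
    [225, 2.0, 1.0],
])
y_yield = np.array([0.42, 0.55, 0.71, 0.68])  # measured reaction yields

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_past, y_yield)

# Predict a proposed experiment's outcome before running it in the lab.
proposed = np.array([[210, 1.7, 1.0]])
print(f"predicted yield: {model.predict(proposed)[0]:.2f}")
```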

This has already happened in some industries to some degree, and I think it's only going to continue. So I absolutely agree with this renaissance idea, that AI is really going to enable the speed and the pace of a lot of things to improve.

[01:03:47] Ashley: Yeah, absolutely. Anybody else have anything to add? I think that was great, so thanks for setting that up, Dave. Appreciate it. I was [01:04:00] just curious on you guys' thoughts; we're talking about AI and legal and how it's going to shape it. One example I had seen, I think it was a LinkedIn post: the guy was talking about how he was using, I think, ChatGPT to surf the internet for infringing products based on claims, and he was saying that it was working great.

But it's now being choked back on the legal side. Now he tries to do the same thing and it says, you know, you should consult with a practitioner or legal counsel. Funny, it being choked back. So I don't know, I think that's kind of interesting. My guess is that it's probably those big companies trying to CYA a little bit, since all of

the copyright stuff happened. Who knows if they're putting some of those safeguards into other areas of legal, because it's kind of libel, right? You're suggesting that a product is [01:05:00] infringing, and if that's not really true, then, yeah.

Anyways, who knows if that's where some of those safeguards are coming from: here's some stuff, but, you know, consult somebody.

[01:05:12] Dave: Yeah, I applaud OpenAI and those companies for doing that. Early on, if you asked it to generate references, it would make a list of references that were completely fabricated.

And then, I think as an improvement, they said, well, we can't give you individual references, but here are some common points in references that talk about that. So they're trying to make it more accurate and less misleading.

[01:05:41] Ashley: So don't write your appeal brief with it

[01:05:45] Kristen: and use fake case law, which is happening.

I think that's the inherent issue that I have with AI. It's not that it's AI and it's powerful and it can do all this; I think we can put guardrails on it. I think we can manage [01:06:00] what's going on, and we can use the information appropriately. We can't manage people well, right? What they're going to do, how they're going to use it, and how they're going to be stupid or irresponsible.

And it's really tough not to be able to trust your population to take this beautiful tool and use it respectfully and appropriately, right? They can't even do it with toys, right? It's like...

[01:06:29] Josh: Back in my IT days, we had something called a PEBCAK error. What that stood for is: the problem exists between the chair and the keyboard.

Yep, that problem AI will not fix. I mean, not until it rules us entirely.

[01:06:50] Ashley: Yeah, it'll fix us, right? All right. Well, we had a good run. Yeah, exactly. We've got to go. But awesome. Thank you, Dave. I really [01:07:00] appreciate it.

[01:07:00] Dave: Yeah. Thank you. 

[01:07:01] Ashley: Thank you both. Good discussion.

[01:07:04] Dave: Yeah, good. Bye.

[01:07:05] Ashley: All right. Talk to you tomorrow. Bye bye.

[01:07:09] Josh: All right. That's all for today, folks. Thanks for listening, and remember to check us out at aurorapatents.com for more great podcasts, blogs, and videos covering all things patent strategy. And if you're an agent or attorney and would like to be part of the discussion, or an inventor with a topic you'd like to hear discussed, email us at podcast@aurorapatents.com. Do remember that this podcast does not constitute legal advice, and until next time, keep calm and patent on.

Intro
ChatGPT vs. professionals (mind-blowing stats)
Why AI is evolving so rapidly
AI problems and hallucinations
Episode overview
AI and public disclosure
Mossoff Minute: AI is a Tool
Part 1: Current state of AI patent tools
Key players in AI software tools for patents
AI patent searching
AI proofreading tools
AI patent prosecution tools
AI drafting: rule-based
AI drafting: LLM-based
Prompt engineering and inventorship
Part 2: AI's Future Role in the Patent System
Patent system issues
Problem: PTAB
Problem: Unsustainable bar
Problem: Search quality
Time travelling PHOSITA
Increasing practitioner output and ROI
Outro