Smart or Slick?
In our last newsletter we predicted that the effects of Artificial Intelligence (AI) chat bots on the legal profession would be substantially less than the current hype suggests.
Last week, three researchers at Stanford and U.C. Berkeley reported that the “intelligence” demonstrated by leading large language model services (“artificial intelligence” agents, in this case GPT-3.5 and GPT-4) may be less a matter of intelligence than of facility. Like the many smooth human talkers we’ve all known, GPT can put up an appearance of intelligence that is only skin deep, a facade. In their tests, GPT’s cognitive abilities declined in selected areas both across generational versions and over time, sometimes substantially and without warning.
Yes, that’s right. AI has gotten dumber.
At TheFormTool we sell tools that leverage real intelligence to help our customers’ real smarts compete efficiently against the artificial players.
The practical effect was illustrated in a Southern District of New York case in which sanctions were ordered against two attorneys and their firm after they filed a brief containing six citations “made up out of whole cloth” by ChatGPT. Opposing counsel researched the citations and found them to be non-existent. The judge was not pleased.
An additional concern, not directly addressed in the case, is the disclosure to others of information relating to the representation of a client. Reuters reported, “‘That’s one reason why some law firms have explicitly told lawyers not to use ChatGPT and similar programs on client matters,’ said Holland & Knight partner Josias Dewey, who has been working on developing internal artificial intelligence programs at his firm.”