
Whitepaper

We apply an AI quality evaluation approach to discuss the risks illustrated by a noteworthy legal incident in which the use of generative AI (GAI) led to substantial reputational and legal ramifications. The incident involved a lawyer who used ChatGPT in a lawsuit and cited nonexistent legal precedents suggested by the AI, a notable example of 'hallucination'. Hallucination occurs when Large Language Models (LLMs) produce assertive yet false statements, because they have no genuine understanding of the content they generate. While the lawyer's negligence in verifying the references is apparent, the incident raises critical questions about the formal requirements lawyers and their organisations must fulfil to ensure AI systems are used in a compliant and ethical manner.

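One practical mitigation the incident points to is verifying AI-suggested references against an authoritative source before relying on them. The short Python sketch below illustrates the idea only; the citation strings, the trusted_index set, and the verify_citations helper are hypothetical stand-ins, and a real workflow would query an authoritative legal database and still end with human review.

# Hypothetical sketch: separating LLM-suggested citations into those found in a
# trusted index and those that need manual review (possible hallucinations).
# The citation strings and the trusted index below are illustrative stand-ins,
# not real legal data.

llm_suggested_citations = [
    "Example v. Placeholder, 123 F.3d 456 (2d Cir. 1997)",      # illustrative only
    "Fictional v. Nonexistent, 999 F.3d 111 (11th Cir. 2019)",  # illustrative only
]

trusted_index = {
    "Example v. Placeholder, 123 F.3d 456 (2d Cir. 1997)",
}

def verify_citations(citations, index):
    """Return (verified, unverified) lists based on exact matches in the index."""
    verified = [c for c in citations if c in index]
    unverified = [c for c in citations if c not in index]
    return verified, unverified

verified, unverified = verify_citations(llm_suggested_citations, trusted_index)
print("Verified citations:", verified)
print("Flagged for manual review (possible hallucination):", unverified)

In practice, exact string matching is far too brittle for legal citations; the point is only that an explicit verification step sits between the model's output and its use.
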
It should be underlined that this specific case unfolded in the United States, a jurisdiction in which case law, also known as common law precedent, plays a more significant role than in many other legal systems. In contrast, countries like Germany operate under a civil law system, where legal codes and statutes carry greater weight. Hence, the subsequent evaluation and discussion of this case must be viewed through the lens of the US legal framework.

Importantly, our AI quality evaluation approach is universal and applicable to any AI application, be it an LLM assisting in legal cases, an Object Detection Model identifying pedestrians for autonomous vehicles, or a Classification Model aiding doctors in brain health analysis.


Whitepaper outline

  • Introduction and Motivation
  • Methodology
  • Risk Identification
  • Risk Mitigation
  • Conclusion

Related service: Artificial Intelligence
