- Publisher: Springer, Berlin
- Author: Erik Cambria
- Item no.: KNV97596777
- ISBN: 9783031739736
About half a century ago, AI pioneers like Marvin Minsky embarked on the ambitious project of emulating how the human mind encodes and decodes meaning. While today we have a better understanding of the brain thanks to neuroscience, we are still far from unlocking the secrets of the mind, especially when it comes to language, the prime example of human intelligence. "Understanding natural language understanding", i.e., understanding how the mind encodes and decodes meaning through language, is a significant milestone in our journey towards creating machines that genuinely comprehend human language. Large language models (LLMs) such as GPT-4 have astounded us with their ability to generate coherent, contextually relevant text, seemingly bridging the gap between human and machine communication. Yet, despite their impressive capabilities, these models operate on statistical patterns rather than true comprehension.
This textbook delves into the nuanced differences between these two paradigms and explores the future of AI as we strive to achieve true natural language understanding (NLU). LLMs excel at identifying and replicating patterns within vast datasets, producing responses that appear intelligent and meaningful. They can generate text that mimics human writing styles, provide summaries of complex documents, and even engage in extended dialogues with users. However, their limitations become evident when they encounter tasks that require deeper understanding, reasoning, and contextual knowledge. An NLU system that deconstructs meaning by leveraging linguistics and semiotics (on top of statistical analysis) represents a more profound level of language comprehension. It involves understanding context in a manner similar to human cognition, discerning the subtle meanings, implications, and nuances that current LLMs might miss or misinterpret. Such a system grasps the semantics behind words and sentences, comprehending synonyms, metaphors, idioms, and abstract concepts with precision.
This textbook explores the current state of LLMs, their capabilities and limitations, and contrasts them with the aspirational goals of NLU. The author delves into the technical foundations required for achieving true NLU, including advanced knowledge representation, hybrid AI systems, and neurosymbolic integration, while also examining the ethical implications and societal impacts of developing AI systems that genuinely understand human language. Containing exercises, a final assignment, and a comprehensive quiz, the textbook is meant as a reference for courses on information retrieval, AI, NLP, data analytics, data mining, and more.