LangChain Module I/O — LLM

Tony
7 min read · Dec 31, 2023

In my previous article, I introduced the “Prompt” component of LangChain’s Module I/O. In this article, let’s talk about the “Language Models” component.

Large language models aren’t limited to ChatGPT. Using OpenAI’s API is convenient and efficient, but what if I want to use a different model, such as the open-source Llama2 or ChatGLM? How would I go about that? A quick sketch of how LangChain handles this follows below.
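This is what LangChain’s model wrappers are for: they expose a common interface, so switching from a hosted model to a local one is mostly a matter of constructing a different wrapper. Here is a minimal sketch; the langchain version, model names, and file path are my own assumptions for illustration, not something the article prescribes:

```python
# A minimal sketch of LangChain's shared LLM interface, assuming langchain ~0.0.35x,
# an OPENAI_API_KEY set in the environment, and (for the local option) a Llama 2
# GGUF file at a hypothetical path. Swapping backends only changes the constructor;
# the calling code stays the same.
from langchain.llms import OpenAI, LlamaCpp

# Option 1: a hosted OpenAI completion model.
llm = OpenAI(model_name="gpt-3.5-turbo-instruct", temperature=0)

# Option 2: an open-source Llama 2 model running locally via llama.cpp
# (the path and quantization here are placeholders).
# llm = LlamaCpp(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf", temperature=0)

# Either wrapper exposes the same call surface, so downstream chains don't change.
print(llm.predict("Explain what a foundation model is in one sentence."))
```

Because both wrappers share the same base interface, any chain or prompt template built on top of them keeps working when the underlying model is swapped.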

The History of LLM Development

Google’s seminal 2017 paper, “Attention Is All You Need,” introduced the Transformer architecture, sparking a significant leap in AI development. The Transformer has become the foundational architecture underlying almost all pretrained models. Large-scale language models built on Transformer pretraining are often referred to as “Foundation Models” or “Base Models.”

During their training phase, these models acquire extensive linguistic knowledge, encompassing vocabulary, grammar, sentence structure, and contextual information. This knowledge, learned from vast datasets, provides a versatile and rich linguistic foundation for various downstream tasks, such as sentiment analysis, text classification, named entity recognition, and question-answering systems. This advancement has opened up possibilities for addressing many complex problems in natural language processing (NLP).

In the early days of pretrained models, BERT unquestionably stood as the most iconic and influential model. By learning both forward and backward contextual information in text, BERT achieved a profound understanding of sentence structures.

Following BERT, a wave of large-scale pretrained models emerged in rapid succession, ushering in a new era for NLP. These models have significantly accelerated progress in the field, tackling previously difficult problems such as translation, text summarization, and conversational dialogue, and providing practitioners with powerful tools.
