# What are the parameters in LLM?

Let’s understand how parameters play an important role in LLMs.

The scale of present large language models (LLMs) is quantified by parameter count. GPT-3 has 175 billion parameters; Phi-1.5, on the other hand, has merely 1.3 billion, while Llama comes in versions ranging from 7 billion to 70 billion parameters.

So let’s understand this in a simple analogy.

We have seen that the price of a product depends on various factors.

Price of a product = Manufacturing unit price + quantity.

Now here ‘**quantity**’ is one of the *parameters* that we take into consideration for determining the price of a product. Likewise, we can keep on adding more parameters to determine a more accurate price.

Example: Price of a product = Manufacturing unit price + quantity + logistics + marketing + taxes, and so on.
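The pricing analogy above can be sketched in a few lines of Python. This is a toy model, not anything from a real pricing system: the factor names and weight values below are made-up illustrative numbers, and each weight plays the role of one “parameter”.

```python
# Toy "price model": each factor is an input, each weight is one parameter.
# All factor names and numbers are illustrative, not real data.

def price(factors, weights, base=0.0):
    """Weighted sum of factors -- each weight is one learnable parameter."""
    return base + sum(f * w for f, w in zip(factors, weights))

# factors: [manufacturing unit price, quantity, logistics, marketing, taxes]
factors = [10.0, 5.0, 2.0, 1.5, 0.8]
weights = [1.0, 0.9, 1.2, 0.5, 1.0]  # 5 parameters in this toy model

print(price(factors, weights))  # → 18.45
```

Adding more factors (and their weights) makes the estimate more accurate, which is exactly the sense in which more parameters let a model capture more nuance.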

We can think of parameters in Large Language Models (LLMs) the same way. LLMs are, at their core, neural networks, and the foundational element of a neural network closely resembles our method of determining the price of a product.

Here we have a 3-layer neural network where nodes are interconnected, and each connection between nodes is a parameter. In each layer, inputs are weighted by these parameters according to their importance and carried forward to the next layer, just as the factors above combine to determine the price of a product.
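To make the “connections are parameters” idea concrete, here is a minimal sketch that counts the parameters of a small fully connected network. The layer sizes are made-up for illustration; the counting rule (one weight per connection, one bias per neuron) is the standard one:

```python
# Count parameters in a tiny fully connected network.
# Layer sizes are illustrative: 5 inputs, two hidden layers, 1 output.
layer_sizes = [5, 8, 4, 1]

def count_parameters(sizes):
    """Each connection is a weight; each non-input neuron has a bias."""
    total = 0
    for n_in, n_out in zip(sizes, sizes[1:]):
        total += n_in * n_out  # weights (connections between layers)
        total += n_out         # biases (one per neuron in the next layer)
    return total

print(count_parameters(layer_sizes))  # → 89
```

Even this tiny network has 89 parameters; scaling the same construction up to thousands of neurons per layer and dozens of layers is how LLMs reach billions.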

# How large are they in an LLM?

We can adjust which parameters in the LLM’s layers are trained using a Low-Rank Adaptation of Large Language Models (LoRA) config.
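A quick back-of-the-envelope sketch shows why LoRA helps: instead of updating a full weight matrix, it trains two small low-rank matrices. The dimensions below (a 4096×4096 projection, rank 8) are assumed for illustration, not taken from any specific model:

```python
# Why LoRA shrinks the number of trainable parameters.
# Full fine-tuning updates an entire d_out x d_in weight matrix W.
# LoRA freezes W and trains two low-rank matrices instead:
#   A (r x d_in) and B (d_out x r), so the update is B @ A.
# Dimensions below are illustrative.

d_in, d_out, r = 4096, 4096, 8

full_params = d_in * d_out             # updated by full fine-tuning
lora_params = r * d_in + d_out * r     # trained by LoRA

print(full_params)                 # → 16777216
print(lora_params)                 # → 65536
print(lora_params / full_params)   # → 0.00390625 (about 0.4%)
```

In a real setup this rank `r` (along with which layers to target) is what a LoRA config lets you choose, trading a small amount of expressiveness for a huge reduction in trainable parameters.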

*Thanks again for your time. If you enjoyed this short article, there are tons of topics in advanced analytics, data science, and machine learning available in my Medium repo:* https://medium.com/@bobrupakroy

**Some of my alternative internet presences** are Facebook, Instagram, Udemy, Blogger, Issuu, Slideshare, Scribd, and more.

**Also available on Quora** @ https://www.quora.com/profile/Rupak-Bob-Roy

Let me know if you need anything. Talk Soon.