Not to be confused with hyperparameter.
In a large language model, parameters are the numerical values learned by the model during training. Each parameter is typically stored as a 16-bit or 32-bit floating-point number, and the parameters are iteratively adjusted during training so that the model outputs text consistent with its training data.
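The following is a minimal sketch, assuming PyTorch, of what this means in practice: a model's parameters are floating-point tensors that can be counted and stored at different precisions. The toy network here is only an illustration and is far smaller than any real large language model.

```python
import torch
import torch.nn as nn

# A tiny feed-forward network standing in for an LLM.
# Real large language models are transformers with billions of such values.
model = nn.Sequential(
    nn.Linear(512, 2048),
    nn.ReLU(),
    nn.Linear(2048, 512),
)

# Each parameter tensor holds learned floating-point values; training
# (backpropagation) adjusts them to fit the training data.
total = sum(p.numel() for p in model.parameters())
print(f"Parameter count: {total:,}")                         # ~2.1 million for this toy model
print(f"Storage dtype: {next(model.parameters()).dtype}")     # torch.float32 by default

# Converting to 16-bit floats halves the memory needed per parameter.
model_fp16 = model.half()
print(f"After .half(): {next(model_fp16.parameters()).dtype}")  # torch.float16
```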
Typical open-source models released in 2023 have between 7 billion and 70 billion parameters. By contrast, the model trained in the 2017 paper "Attention Is All You Need", which first introduced the transformer architecture, had around 65 million parameters, while the model underlying GPT-4 is rumored to contain around 1.7 trillion parameters.