RoPE scaling is a technique for transformer models that use rotary position embedding (RoPE), used to extend the model's context window. In RoPE scaling, a scaling factor is applied so that the rotation angles RoPE assigns to consecutive token positions are spaced more closely, compressing a longer sequence into the positional range the model saw during pretraining. This allows models to be adapted, with no additional training or only limited fine-tuning, to context windows several times longer than the original.
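
To make the idea concrete, the sketch below is an illustrative NumPy implementation of the linear ("position interpolation") variant of RoPE scaling; the function names, the base of 10000, and the example window sizes are assumptions for illustration, not details taken from the text above. Dividing positions by the scale factor is what reduces the angular spacing between consecutive tokens.

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0, scale=1.0):
    """Rotation angles for RoPE. With scale > 1 (linear position
    interpolation), positions are divided by the scale factor, shrinking
    the angular distance between consecutive tokens so a longer sequence
    fits inside the positional range seen during pretraining."""
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))       # (dim/2,)
    scaled_pos = np.asarray(positions, dtype=np.float64) / scale  # (seq,)
    return np.outer(scaled_pos, inv_freq)                         # (seq, dim/2)

def apply_rope(x, angles):
    """Rotate consecutive channel pairs of x (seq, dim) by the given angles."""
    x1, x2 = x[:, 0::2], x[:, 1::2]
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Hypothetical example: a model pretrained with a 2048-token window is
# extended to 8192 tokens by scaling positions with a factor of 4.
seq_len, dim = 8192, 64
q = np.random.randn(seq_len, dim)
angles = rope_angles(np.arange(seq_len), dim, scale=4.0)
q_rotated = apply_rope(q, angles)
```

With `scale=1.0` the functions reduce to standard RoPE; the scaled version simply reuses the same rotation machinery on compressed positions, which is why the adaptation often needs little or no fine-tuning.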