Yes! The obvious answer is to just train on longer sequences (more positions). That takes a ton of memory though (attention scales with the square of the context length), so most models are trained at 4k/8k and then finetuned up to longer contexts, similar to many of the image models.
However, there's been some work to "get extra mileage" out of current models, so to speak, using rotary position (RoPE) tricks and a few other techniques. Those, in combination with finetuning, are the approach many are using at the moment IIRC — see the sketch after the links below.
Here's a decent overview https://aman.ai/primers/ai/context-length-extension/
RoPE position interpolation - https://arxiv.org/abs/2306.15595
YaRN (builds on RoPE) - https://arxiv.org/pdf/2309.00071.pdf
LongLoRA - https://arxiv.org/pdf/2309.12307.pdf
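To make the RoPE trick concrete, here's a minimal sketch of the position-interpolation idea: instead of feeding the model positions it never saw in training, you squeeze the longer range of positions back into the trained range and then finetune briefly. The half-split RoPE layout and the 4k -> 16k numbers are just illustrative assumptions, not any particular library's implementation.

```python
import torch

def rope_angles(positions: torch.Tensor, dim: int, base: float = 10000.0) -> torch.Tensor:
    """Standard RoPE angles: one frequency per pair of channels."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    return positions[:, None].float() * inv_freq[None, :]   # (seq, dim/2)

def apply_rope(x: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
    """Rotate each channel pair of x by the given angles (layout is a
    simplified half-split convention, purely for illustration)."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

# Assumed numbers: model trained at 4k, extended to 16k.
train_len, target_len, head_dim = 4096, 16384, 128
positions = torch.arange(target_len)

# Position interpolation: squeeze the 16k positions back into the
# 0..4095 range the model saw during training, then finetune briefly.
scaled_positions = positions * (train_len / target_len)

q = torch.randn(target_len, head_dim)
q_rotated = apply_rope(q, rope_angles(scaled_positions, head_dim))
```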
The bottleneck is quickly going to be inference. Since current transformer attention scales with the square of the context length, the memory requirements go up very fast. IIRC a 4090 can _barely_ fit a 4-bit 30B model in memory with a 4096-token context.
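Quick back-of-the-envelope on the "squared" part: a single attention score matrix is seq_len x seq_len entries, per head, per layer. The fp16 sizes below cover just that one matrix (ignoring KV cache, weights, and activations), but they show how doubling the context quadruples the cost.

```python
def score_matrix_gib(seq_len: int, bytes_per_el: int = 2) -> float:
    """Size of one (seq_len x seq_len) attention score matrix in fp16.
    Doubling the context quadruples this -- the n^2 blow-up."""
    return seq_len ** 2 * bytes_per_el / 2**30

for seq in (4096, 8192, 16384, 32768):
    print(f"{seq:>6} tokens -> {score_matrix_gib(seq):6.2f} GiB per head, per layer")
```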
From my understanding, some form of RNN is likely to be the next step for longer context. See RWKV as an example of a decent modern RNN: https://arxiv.org/abs/2305.13048
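The appeal is that a recurrent model drags a fixed-size state through the sequence instead of attending over everything it has seen. The snippet below is a toy decay-sum, not RWKV's actual WKV formulation, but it shows why per-token memory stays O(1) in context length:

```python
import torch

def recurrent_state(tokens: torch.Tensor, decay: float = 0.95) -> torch.Tensor:
    """Toy recurrence (not RWKV's real update rule): the memory of the
    entire history lives in one fixed-size state vector, so per-token
    compute and memory stay flat no matter how long the context grows."""
    state = torch.zeros(tokens.shape[-1])
    for x in tokens:                # one token embedding at a time
        state = decay * state + x   # fold the token into the O(1) state
    return state

# 32k tokens in, but the carried state is still just 64 floats.
print(recurrent_state(torch.randn(32_768, 64)).shape)  # torch.Size([64])
```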