Main Page

From llamawiki.ai
[Image: A studious llama]

Welcome to llamawiki.ai, a wiki for reference information relating to Open Source Large Language Models and other Transformers that anyone can run on a home PC. This site aims to give you the references, context and knowledge you need to run your own self-hosted transformers.

Introduction

LLaMA is an open-source large language model (LLM) developed by Meta. It is one of a range of open LLMs released starting in the first half of 2023; see Category:Models for more examples. LLaMA, like other LLMs, is designed to generate human-like text and can be used for a variety of tasks, including answering questions, writing essays, summarizing text, translating languages, and more.

Software

To run an open source LLM locally, you will require appropriate software. Several implementations of the Transformer models have been released that take the weights and biases published with each model and load them into video or system RAM in order to generate text (perform inference). These work in conjunction with user interfaces to allow users to generate text.
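At its core, inference is a loop: the software feeds the current token sequence through the model, picks a next token from the resulting scores, appends it, and repeats. The following is a minimal sketch of that loop in plain Python, using a made-up toy scoring function in place of a real model's forward pass (the names `generate` and `toy_logits` are illustrative, not from any particular library):

```python
def generate(logits_fn, prompt, max_new_tokens=5):
    """Greedy decoding loop: repeatedly pick the highest-scoring
    next token and append it to the sequence.

    logits_fn maps a token sequence to one score per vocabulary
    entry; in real software this is the model's forward pass over
    the weights loaded into video or system RAM.
    """
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = logits_fn(tokens)
        # Greedy choice: index of the highest score.
        next_tok = max(range(len(logits)), key=logits.__getitem__)
        tokens.append(next_tok)
    return tokens

# Toy "model": always prefers the token after the last one, modulo 4.
vocab_size = 4
def toy_logits(tokens):
    return [1.0 if i == (tokens[-1] + 1) % vocab_size else 0.0
            for i in range(vocab_size)]

out = generate(toy_logits, [0], max_new_tokens=3)  # [0, 1, 2, 3]
```

Real inference software replaces the greedy choice with sampling strategies (temperature, top-k, top-p), but the overall shape of the loop is the same.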

Models can be adapted either by full fine-tuning or by applying LoRA (Low-Rank Adaptation), which trains a small number of additional parameters alongside the frozen base weights.
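The idea behind LoRA can be sketched in a few lines: instead of updating a large weight matrix W, one trains two small low-rank matrices A and B whose product is added to the layer's output. This is an illustrative pure-Python sketch of that computation, not the implementation from any particular library:

```python
def lora_forward(x, W, A, B, alpha=1.0):
    """Compute y = W x + (alpha / r) * B (A x).

    W is the frozen base weight (m x n). A (r x n) and B (m x r)
    are the small trainable low-rank matrices, with rank r much
    smaller than m or n. Names here are illustrative.
    """
    r = len(A)

    def matvec(M, v):
        # Plain matrix-vector product.
        return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

    base = matvec(W, x)                 # frozen base layer output
    delta = matvec(B, matvec(A, x))     # low-rank correction
    return [b + (alpha / r) * d for b, d in zip(base, delta)]

# With B initialised to zeros (the usual LoRA starting point), the
# adapted layer reproduces the base layer exactly.
W = [[1.0, 2.0], [3.0, 4.0]]
A = [[0.5, 0.5]]            # rank r = 1
B = [[0.0], [0.0]]
x = [1.0, 1.0]
y = lora_forward(x, W, A, B)  # equals W x = [3.0, 7.0]
```

Because only A and B are trained, a LoRA adapter is tiny compared to the base model and can be swapped in and out cheaply.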

Theory

The open source LLMs use the principles of transformer-based language models, which were first outlined in the paper Attention is All You Need. These models utilize an attention mechanism to weigh the importance of different words in a given context when generating text. The architecture of LLaMA consists of multiple layers of transformer blocks, each of which performs a series of operations to generate the next word in a sentence. The theory page contains links to details of how LLaMA models operate.
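The attention mechanism described above can be illustrated with a small example. This is a toy single-query version of scaled dot-product attention from Attention is All You Need, written in plain Python for clarity; real implementations operate on batched matrices with multiple heads:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.

    Each key is scored against the query; the softmaxed scores act
    as the "importance weights" over positions, and the output is
    the correspondingly weighted sum of the value vectors.
    """
    d_k = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d_k)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# The query matches the first key more closely, so the output is
# pulled toward the first value vector.
q = [1.0, 0.0]
ks = [[1.0, 0.0], [0.0, 1.0]]
vs = [[10.0, 0.0], [0.0, 10.0]]
out = attention(q, ks, vs)
```

The division by the square root of the key dimension keeps the dot products from growing with vector size, which would otherwise push the softmax into a regime where one position dominates entirely.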
