<aside>
💡
Related pages: there are a great many AI-related tools. RAG development tools are collected in RAG開發工具, LLMs and related tools in Large Language Models, Graph RAG development tools in Graph RAG, and other tools in Generative AI開發工具. Some material still sits in Generative AI and RAG and will be gradually reorganized here.
</aside>
Introduction
<aside>
🚧
LLM-related links will be moved to this page in the future.
</aside>
- Top 10 LLM Concepts (LLM十大觀念)
- Understanding LLMs from scratch using middle school math
- What makes large language models work so well?
- Embeddings
- Subword Tokenizers
- Self Attention
- Softmax
- Residual connections
- Layer Normalization
- Dropout
- Multi-head Attention
- Positional encoding and embedding
- The GPT architecture
- The transformer architecture
- Decoding an LLM’s Thoughts: Logit Lens in Just 25 Lines of Code
- The Language Model Landscape — Version 7
- Zone 1 — Language Models Disruption
- Zone 2 — General Use-Cases
- Zone 3 — Specific Implementations
- Zone 4 — Commercial Model Providers
- Zone 5 — Model Diversification
- Zone 6 — Foundation Tooling
- Zone 7 — End User UIs
- Language Model Categorisation
- Understanding Model Size From Small to Extra-Large Models
- Large Language Models
- Medium to Small Language Models
- Expanding Beyond Language with Multi-Modal Models
- Action Models (also known as Large Action Models)
- LLM Architectures Explained
- Unmasking the Surprising Diversity of AI Hallucinations
- Your LLM knows when it’s lying to you. (Friend Link)
- Statement Accuracy Prediction based on Language Model Activations (SAPLMA) is a method that uses an LLM's internal activations to predict whether a statement the model outputs is accurate (see the probe sketch after this list).
- Step-by-Step Exploration of Transformer Attention Mechanisms (Friend Link)
- Which AI Model is the Best in 2024?
- ChatPlayground is an AI platform that bundles 16 AI tools into one subscription: industry-leading AI models, a prompt library for common use cases, real-time web search, image generation, history recall, multilingual support, and more. It targets developers, data scientists, students, researchers, content creators, writers, and AI enthusiasts.
- Advanced LLM Chain Orchestration: Building Complex AI Systems That Scale (Friend Link)
- The Next Frontier in LLM Accuracy: Exploring the Power of Lamini Memory Tuning
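A minimal sketch of the SAPLMA idea referenced above: take a hidden-layer activation for each statement from a causal LM and train a simple probe to predict whether the statement is true. The model name ("gpt2"), the probed layer, and the tiny labeled dataset are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of a SAPLMA-style truthfulness probe (illustrative, not the paper's exact setup).
# Assumes `transformers`, `torch`, and `scikit-learn` are installed; "gpt2" and the tiny
# labeled dataset below are placeholder choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def statement_activation(text: str, layer: int = -1) -> torch.Tensor:
    """Return the hidden state of the last token at the chosen layer."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states is a tuple of [1, seq_len, hidden_dim] tensors, one per layer.
    return outputs.hidden_states[layer][0, -1]

# Tiny illustrative training set of (statement, is_true) pairs.
labeled = [
    ("Paris is the capital of France.", 1),
    ("The Pacific is the largest ocean on Earth.", 1),
    ("The Sun orbits the Earth.", 0),
    ("Madrid is the capital of Germany.", 0),
]
X = torch.stack([statement_activation(s) for s, _ in labeled]).numpy()
y = [label for _, label in labeled]

probe = LogisticRegression(max_iter=1000).fit(X, y)

test = "Berlin is the capital of France."
score = probe.predict_proba(statement_activation(test).numpy().reshape(1, -1))[0, 1]
print(f"P(accurate) for {test!r}: {score:.2f}")
```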
Text Splitting / Chunking
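Until links are collected here, a minimal sketch of one common strategy, fixed-size chunking with character overlap (the chunk_size and overlap values are arbitrary illustrative choices):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap between neighbours."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Example: a long document becomes overlapping chunks ready for embedding.
doc = "lorem ipsum " * 500  # placeholder document text
print(len(chunk_text(doc)))
```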
Embedding
- Vector Embeddings Explained for Developers!
- Types of vector embeddings
- Word embeddings
- Sentence and document embeddings
- Graph embeddings
- Image embeddings
- Text Embeddings: Comprehensive Guide
- Practical applications
- Clustering
- Classification
- Finding anomalies
- To work with an extensive knowledge base, we can leverage the RAG approach (a minimal sketch follows this list):
- Compute embeddings for all the documents and store them in vector storage.
- When a user request arrives, compute its embedding and retrieve the documents relevant to that request from storage.
- Pass only the relevant documents to the LLM to get the final answer.
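A minimal sketch of those three steps, assuming the sentence-transformers library and a small in-memory NumPy index; the model name, the sample documents, and the stubbed LLM call are illustrative assumptions:

```python
# Minimal RAG retrieval sketch (illustrative). Assumes `sentence-transformers` and `numpy`
# are installed; "all-MiniLM-L6-v2", the documents, and the LLM call are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# 1. Compute embeddings for all documents and keep them in a simple vector store.
documents = [
    "The transformer architecture relies on self-attention.",
    "Layer normalization stabilizes training of deep networks.",
    "Positional encodings inject token-order information.",
]
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """2. Embed the user request and return the most similar documents (cosine similarity)."""
    query_vector = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vector  # cosine similarity, since vectors are normalized
    best = np.argsort(scores)[::-1][:top_k]
    return [documents[i] for i in best]

def answer_with_llm(query: str, context: list[str]) -> str:
    """3. Pass only the relevant documents to an LLM (stubbed here) to get the final answer."""
    prompt = "Answer using only this context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"
    return prompt  # replace with a real LLM call

question = "What does self-attention do?"
print(answer_with_llm(question, retrieve(question)))
```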