LLM-engineer-handbook

🔥 Large Language Models (LLMs) have taken the NLP community, the AI community, and the whole world by storm.

Why did we create this repo?

  • Everyone can now build an LLM demo in minutes, but it takes a real LLM/AI expert to close the last-mile gaps in performance, security, and scalability.
  • The LLM space is complicated! This repo provides a curated list to help you navigate so that you are more likely to build production-grade LLM applications. It includes a collection of Large Language Model frameworks and tutorials, covering model training, serving, fine-tuning, LLM applications & prompt optimization, and LLMOps.

However, classical ML is not going away. Even LLM systems rely on it: we have seen classical models used for protecting data privacy, detecting hallucinations, and more. So do not forget to study the fundamentals of classical ML.

Overview

The current workflow might look like this: you build a demo using an existing application library or directly with an LLM provider's SDK. It works to a degree, but you then need to create evaluation and training datasets to further optimize performance (e.g., accuracy, latency, cost).

You can do prompt engineering or auto-prompt optimization; you can create a larger dataset to fine-tune the LLM or use Direct Preference Optimization (DPO) to align the model with human preferences. Then you need to consider serving and LLMOps to deploy the model at scale, along with data pipelines to keep the data fresh.
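The "create an evaluation dataset, then optimize against it" loop above can be sketched in a few lines. This is a heavily simplified illustration: `llm_call` is a hypothetical stub standing in for a real provider SDK call, and the metrics shown (accuracy, latency) are the kind you would track, not a complete eval harness.

```python
import time

def llm_call(prompt: str) -> str:
    """Stub standing in for a real provider SDK call (hypothetical behavior)."""
    return "positive" if "great" in prompt else "negative"

def evaluate(prompt_template: str, dataset: list[tuple[str, str]]) -> dict:
    """Score a prompt template on a labeled eval set: accuracy and latency."""
    correct, start = 0, time.perf_counter()
    for text, label in dataset:
        correct += llm_call(prompt_template.format(text=text)) == label
    return {
        "accuracy": correct / len(dataset),
        "avg_latency_s": (time.perf_counter() - start) / len(dataset),
    }

eval_set = [("This movie is great", "positive"), ("Terrible plot", "negative")]
metrics = evaluate("Classify the sentiment: {text}", eval_set)
```

Once such a loop exists, every technique below (prompt optimization, fine-tuning, DPO) becomes a change you can measure rather than guess about.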

We organize the resources as follows: (1) libraries, frameworks, and tools; (2) learning resources covering the whole LLM lifecycle; (3) understanding LLMs; (4) social accounts and community; and (5) how to contribute to this repo.

Libraries & Frameworks & Tools

Applications

Build & Auto-optimize

  • AdalFlow - A library to build and auto-optimize LLM applications, from chatbots and RAG to agents, by SylphAI.

  • dspy - DSPy: The framework for programming—not prompting—foundation models.

Build

  • LlamaIndex - A Python library for augmenting LLM apps with data.
  • LangChain - A popular Python/JavaScript library for chaining sequences of language model prompts.
  • Haystack - A Python framework for building applications powered by LLMs.
  • Instill Core - A platform built with Go for orchestrating LLMs to create AI applications.

Prompt Optimization

  • AutoPrompt - A framework for prompt tuning using Intent-based Prompt Calibration.
  • Promptify - A library for prompt engineering that simplifies NLP tasks (e.g., NER, classification) using LLMs like GPT.

Others

  • LiteLLM - Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format.

Pretraining

  • PyTorch - PyTorch is an open source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing.
  • TensorFlow - TensorFlow is an open source machine learning library developed by Google.
  • JAX - Google’s library for high-performance computing and automatic differentiation.
  • tinygrad - A minimalistic deep learning library with a focus on simplicity and educational use, created by George Hotz.
  • micrograd - A simple, lightweight autograd engine for educational purposes, created by Andrej Karpathy.
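To make concrete what engines like micrograd teach, here is a toy scalar autograd engine in the same spirit. It is an illustrative sketch limited to `+` and `*`, not micrograd's actual code: each operation records its inputs and a closure that applies the chain rule during the backward pass.

```python
class Value:
    """A scalar that tracks the operations producing it, for backprop."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None  # applies the local chain rule
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad   # d(a+b)/da = 1
            other.grad += out.grad  # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # d(ab)/da = b
            other.grad += self.data * out.grad  # d(ab)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()
```

Calling `backward()` on an output propagates gradients to every input, which is the same mechanism PyTorch and JAX implement at scale over tensors.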

Fine-tuning

  • Transformers - Hugging Face Transformers is a popular library for Natural Language Processing (NLP) tasks, including fine-tuning large language models.
  • Unsloth - Fine-tune Llama 3.2, Mistral, Phi-3.5, and Gemma models 2-5x faster with 80% less memory.
  • LitGPT - 20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale.

Serving

  • TorchServe - An open-source model serving library developed by AWS and Facebook specifically for PyTorch models, enabling scalable deployment, model versioning, and A/B testing.

  • TensorFlow Serving - A flexible, high-performance serving system for machine learning models, designed for production environments, and optimized for TensorFlow models but also supports other formats.

  • Ray Serve - Part of the Ray ecosystem, Ray Serve is a scalable model-serving library that supports deployment of machine learning models across multiple frameworks, with built-in support for Python-based APIs and model pipelines.

  • NVIDIA TensorRT-LLM - TensorRT-LLM is NVIDIA's compiler for transformer-based models (LLMs), providing state-of-the-art optimizations on NVIDIA GPUs.

  • NVIDIA Triton Inference Server - A high-performance inference server supporting multiple ML/DL frameworks (TensorFlow, PyTorch, ONNX, TensorRT etc.), optimized for NVIDIA GPU deployments, and ideal for both cloud and on-premises serving.

  • ollama - A lightweight, extensible framework for building and running large language models on the local machine.

  • llama.cpp - A library for running LLMs in pure C/C++. Supported architectures include LLaMA, Falcon, Mistral, MoEs, Phi, and more.

  • TGI - HuggingFace's text-generation-inference toolkit for deploying and serving LLMs, built on top of Rust, Python and gRPC.

  • vllm - An optimized, high-throughput serving engine for large language models, designed to efficiently handle massive-scale inference with reduced latency.

  • sglang - SGLang is a fast serving framework for large language models and vision language models.

  • LitServe - LitServe is a lightning-fast serving engine for any AI model of any size. Flexible. Easy. Enterprise-scale.
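Most of the serving engines above expose an OpenAI-compatible HTTP endpoint, so clients built against one can point at another. As a minimal sketch of that request/response contract, here is a stub server using only the Python standard library; it echoes the last message where a real engine would run model inference.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class ChatHandler(BaseHTTPRequestHandler):
    """Minimal OpenAI-style chat-completions endpoint (echo stub)."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        # A real server would run inference here; we echo the last message.
        reply = {"choices": [{"message": {
            "role": "assistant",
            "content": "echo: " + request["messages"][-1]["content"],
        }}]}
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def start_server() -> HTTPServer:
    """Bind to an OS-assigned port and serve in a background thread."""
    server = HTTPServer(("127.0.0.1", 0), ChatHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Engines like vLLM and TGI implement this same contract, with continuous batching, KV-cache management, and GPU inference behind the interface.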

Prompt Management

  • Opik - An open-source platform for evaluating, testing, and monitoring LLM applications.

Datasets

Use Cases

  • Datasets - A vast collection of ready-to-use datasets for machine learning tasks, including NLP, computer vision, and audio, with tools for easy access, filtering, and preprocessing.
  • Argilla - A UI tool for curating and reviewing datasets for LLM evaluation or training.
  • distilabel - A library for generating synthetic datasets with LLM APIs or models.

Fine-tuning

  • LLMDataHub - A quick guide to trending instruction fine-tuning datasets.
  • LLM Datasets - High-quality datasets, tools, and concepts for LLM fine-tuning.

Pretraining

Benchmarks

  • lighteval - A library for evaluating local LLMs on major benchmarks and custom tasks.

  • evals - OpenAI's open sourced evaluation framework for LLMs and systems built with LLMs.

  • ragas - A library for evaluating and optimizing LLM applications, offering a rich set of eval metrics.

Agent

Learning Resources for LLMs

We categorize the best resources for learning LLMs, from modeling to training to applications.

Applications

General

Agent

Lectures

  • LLM Agents MOOC - A playlist of 11 lectures by the Berkeley RDI Center on Decentralization & AI (CS294), featuring guest speakers such as Yuandong Tian, Graham Neubig, Omar Khattab, and others, covering core topics on Large Language Model agents.

Projects

  • OpenHands - Open source agents for developers by AllHands.
  • CAMEL - The first LLM multi-agent framework and an open-source community dedicated to finding the scaling laws of agents, by CAMEL-AI.
  • swarm - Educational framework exploring ergonomic, lightweight multi-agent orchestration. Managed by the OpenAI Solutions team.
  • AutoGen - A programming framework for agentic AI 🤖 by Microsoft.
  • CrewAI - 🤖 CrewAI: Cutting-edge framework for orchestrating role-playing, autonomous AI agents.

Modeling

Training

  • Chip's Blog - Chip Huyen's blog on training LLMs, including the latest research, tutorials, and best practices.
  • Lil'Log - Lilian Weng's (OpenAI) blog on machine learning, deep learning, and AI, with a focus on LLMs and NLP.

Fine-tuning

  • DPO: Rafailov, Rafael, et al. "Direct preference optimization: Your language model is secretly a reward model." Advances in Neural Information Processing Systems 36 (2024). Code.
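For quick reference, the DPO objective from the paper optimizes the policy $\pi_\theta$ directly on preference pairs ($y_w$ preferred over $y_l$) against a frozen reference model $\pi_{\text{ref}}$, with no explicit reward model:

$$
\mathcal{L}_{\text{DPO}}(\pi_\theta; \pi_{\text{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l)\sim \mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]
$$

where $\sigma$ is the logistic function and $\beta$ controls how strongly the policy is penalized for drifting from the reference model.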

Fundamentals

  • Intro to LLMs - A one-hour general-audience introduction to Large Language Models by Andrej Karpathy.
  • Building GPT-2 from Scratch - A four-hour deep dive into building GPT-2 from scratch by Andrej Karpathy.

Books

Newsletters

  • Ahead of AI - Sebastian Raschka's newsletter, covering end-to-end understanding of LLMs.
  • Decoding ML - Content on building production GenAI, RecSys and MLOps applications.

Auto-optimization

  • TextGrad - Automatic "differentiation" via text: using large language models to backpropagate textual gradients.

Understanding LLMs

It can be fun and important to understand the capabilities, behaviors, and limitations of LLMs. This can directly help with prompt engineering.

In-context Learning

Reasoning & Planning

Social Accounts & Community

Social Accounts

Social accounts are one of the best ways to stay up to date with the latest LLM research, industry trends, and best practices.

| Name | Social | Expertise |
|------|--------|-----------|
| Li Yin | LinkedIn | AdalFlow Author & SylphAI founder |
| Chip Huyen | LinkedIn | AI Engineering & ML Systems |
| Damien Benveniste, PhD | LinkedIn | ML Systems & MLOps |
| Jim Fan | LinkedIn | LLM Agents & Robotics |
| Paul Iusztin | LinkedIn | LLM Engineering & LLMOps |
| Armand Ruiz | LinkedIn | AI Engineering Director at IBM |
| Alex Razvant | LinkedIn | AI/ML Engineering |
| Pascal Biese | LinkedIn | LLM Papers Daily |
| Maxime Labonne | LinkedIn | LLM Fine-Tuning |
| Sebastian Raschka | LinkedIn | LLMs from Scratch |
| Zach Wilson | LinkedIn | Data Engineering for LLMs |
| Eduardo Ordax | LinkedIn | GenAI voice @ AWS |

Community

| Name | Social | Scope |
|------|--------|-------|
| AdalFlow | Discord | LLM engineering, auto-prompts, and AdalFlow discussions & contributions |

Contributing

Only with the power of the community can we keep this repo up to date and relevant. If you have any suggestions, please open an issue or submit a pull request directly.

I will keep some pull requests open if I'm not sure whether they are a good fit for this repo; you can vote for them by adding 👍.

Thanks to the community, this repo is getting read by more people every day.

Star History Chart


🤝 Please share so we can continue investing in this list and make it the go-to resource for LLM engineers, whether they are just starting out or looking to stay current in the field.

Share on X Share on LinkedIn


If you have any questions about this opinionated list, do not hesitate to contact Li Yin.