A curated collection of resources for exploring the fascinating field of Generative Artificial Intelligence.
Generative Artificial Intelligence is an innovative technology that leverages machine learning and deep learning algorithms to produce original content, ranging from images and audio to text. This repository serves as a comprehensive guide to discovering and understanding various Generative AI projects, tools, and services.
Generative AI has gained significant attention for its ability to create unique outputs that are often indistinguishable from human-created work, making it applicable across diverse domains such as art, entertainment, marketing, academia, and computer science.
Contributions to this repository are encouraged and appreciated. Please refer to the Contribution Guidelines before making any submissions. You can suggest additions through pull requests or initiate discussions via issues.
- Introduction
- Getting Started
- Frameworks and Libraries
- Projects
- LLM Agents
- Articles and Papers
- Tutorials and Courses
- Webinar Recordings
- Community
- Further Reading
YouTube stats badges for your GitHub profile README
A Practical Introduction to Diffusion Models (Stable Diffusion)
Description: An introduction to Stable Diffusion, covering its practical applications and implementations. The session introduces diffusion models, explaining their role in generating images conditioned on text prompts. Diffusion models iteratively remove noise from a random image, gradually refining it into a high-quality output. The workshop covers denoising autoencoders, basic U-Net architectures, and applying diffusion for image enhancement and generation. Bhavish explores advanced applications such as image-to-image transformation and inpainting. Stable Diffusion offers more stable training than GANs, yielding high-quality results efficiently.
Key Learnings from this video:
- Learn about practical applications of Stable Diffusion
- Understand the role of diffusion models in image generation
- Explore advanced applications like image-to-image transformations
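The iterative denoising idea described above can be illustrated with a deliberately tiny, self-contained sketch. This is a conceptual toy only, not Stable Diffusion: a real diffusion model uses a trained neural network (typically a U-Net) to predict the noise, whereas here a stand-in "oracle" predictor is used so the example runs with no dependencies.

```python
# Toy illustration of reverse diffusion: start from pure noise and
# iteratively subtract a predicted noise component, converging toward
# a "clean" signal. The oracle predictor below is a stand-in for the
# learned noise-prediction network used by real diffusion models.
import random

CLEAN_SIGNAL = [0.0, 1.0, 0.5, -0.5]  # pretend this is a "real" image

def oracle_noise_predictor(x):
    # Stand-in for a trained model: "predicts" the noise as the
    # difference between the current sample and the clean signal.
    return [xi - ci for xi, ci in zip(x, CLEAN_SIGNAL)]

def reverse_diffusion(steps=50, step_size=0.1, seed=0):
    rng = random.Random(seed)
    # Start from pure Gaussian noise, as diffusion sampling does.
    x = [rng.gauss(0.0, 1.0) for _ in CLEAN_SIGNAL]
    for _ in range(steps):
        predicted = oracle_noise_predictor(x)
        # Remove a fraction of the predicted noise at each step.
        x = [xi - step_size * ni for xi, ni in zip(x, predicted)]
    return x

sample = reverse_diffusion()
print([round(v, 3) for v in sample])  # values close to CLEAN_SIGNAL
```

Each step shrinks the remaining noise by a constant factor, so after enough steps the sample lands near the clean signal; real samplers schedule the noise removal per timestep instead of using a fixed fraction.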
Build a Text 2 Image Generator with Stable Diffusion
Description: Learn how to build a Text 2 Image generator using Stable Diffusion technology.
Generative AI - An Overview of Stable Diffusion Image to Image / Text to Image
Description: An overview of Stable Diffusion technology and its applications in image and text generation.
Key Learnings from this video:
- Gain insights into Stable Diffusion technology
- Learn about image-to-image and text-to-image applications
Generative AI - An overview of GPT / ChatGPT algorithm and its applications
Description: An overview of the GPT (Generative Pre-trained Transformer) algorithm and its various applications.
Key Learnings from this video:
- Understand the GPT algorithm and its applications
Generative AI - Image Gen with Stable Diffusion - Model Training, SD Extensions
Description: Dive into the process of training models for image generation using Stable Diffusion, along with its extensions.
Key Learnings from this video:
- Learn about model training for image generation
- Understand Stable Diffusion extensions
Generative AI - Understanding AI Image Generation: DALL-E 2 and Stable Diffusion
Description: Explore the concepts behind AI image generation, including DALL-E 2 and Stable Diffusion.
Key Learnings from this video:
- Gain insights into AI Image Generation
- Understand DALL-E 2 and Stable Diffusion
Generative AI - Stable Diffusion extensions with Automatic1111, ControlNet, DreamBooth
Description: Discover the extensions of Stable Diffusion technology, including Automatic1111, ControlNet, and DreamBooth.
Key Learnings from this video:
- Explore Stable Diffusion extensions
- Understand applications of Automatic1111, ControlNet, and DreamBooth
Generative AI Overview
Description: An overview of Generative AI technology, its current state, and future prospects.
Key Learnings from this video:
- Understand the current state and future prospects of Generative AI
Generative AI - Fine Tuning GPT / ChatGPT for Business Applications
Description: Learn how to fine-tune GPT and ChatGPT models for various business applications.
Key Learnings from this video:
- Explore applications of GPT and ChatGPT in business
- Learn techniques for fine-tuning these models
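Fine-tuning workflows like the one covered above generally start from a prepared training dataset. As a minimal sketch (the file name and example content here are illustrative, not from the video): chat-model fine-tuning services such as OpenAI's expect training data as JSON Lines, one `{"messages": [...]}` object per line.

```python
# Sketch: writing a small chat-format fine-tuning dataset as JSONL.
# One JSON object per line, each with a "messages" list of
# system/user/assistant turns. Content below is made up for
# illustration purposes.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support assistant for Acme Corp."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Go to Settings > Security and choose 'Reset password'."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a support assistant for Acme Corp."},
            {"role": "user", "content": "Can I change my billing email?"},
            {"role": "assistant", "content": "Yes, update it under Settings > Billing > Contact email."},
        ]
    },
]

# Write one JSON object per line (the JSONL convention).
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The resulting file can then be uploaded to the fine-tuning service of your choice; quality and consistency of these examples usually matter more than sheer quantity.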
Can Artificial Intelligence Map Our Moods? - Researchers showed long ago that artificial intelligence models could identify a person’s basic psychological traits from their digital footprints in social media. January 25, 2021.
Generative AI: Industry perspectives - March 10, 2024. If 2023 was generative AI’s breakout year, 2024 is shaping up to be the year for generative AI to prove its value. Which industries are poised to benefit most from the rapidly developing technology?
A Coming-Out Party for Generative A.I., Silicon Valley's New Craze - Article about the rise of generative AI, particularly the success of the Stable Diffusion image generator, and the associated controversies. New York Times, October 21, 2022.
3 insights from nonprofits about generative AI - Nonprofits around the world do critical work in their communities, from helping people build new skills, to studying and protecting biodiversity. But they’re also often stretched, under-resourced and weighed down by time-consuming administrative tasks.
2024 Generative AI Planning: How are IT Organizations Preparing? - How are CIOs and IT leaders approaching 2024 planning and budgeting to account for Generative AI (GenAI) adoption within their IT organizations?
🔥 Must-read papers for LLM-based agents.
A Survey on Large Language Model based Autonomous Agents
Autonomous agents have long been a research focus in academic and industry communities. Previous research often trains agents with limited knowledge in isolated environments, which diverges significantly from human learning processes and makes it hard for agents to achieve human-like decisions. Recently, through the acquisition of vast amounts of web knowledge, large language models (LLMs) have shown potential for human-level intelligence, leading to a surge in research on LLM-based autonomous agents.
An Open-source Framework for Autonomous Language Agents
Agents is an open-source library/framework for building autonomous language agents. The library is carefully engineered to support important features including long-short term memory, tool usage, web navigation, multi-agent communication, and brand-new features including human-agent interaction and symbolic control. With Agents, one can customize a language agent or a multi-agent system by simply filling in a config file in natural language, and deploy the language agents in a terminal, a Gradio interface, or a backend service.
The Rise and Potential of Large Language Model Based Agents: A Survey
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit.
KnowAgent: Knowledge-Augmented Planning for LLM-Based Agents
Large Language Models (LLMs) have demonstrated great potential in complex reasoning tasks, yet they fall short when tackling more sophisticated challenges, especially when interacting with environments through generating executable actions.
An Easy-to-use Instruction Processing Framework for Large Language Models
To construct high-quality instruction datasets, many instruction processing approaches have been proposed, aiming to achieve a delicate balance between data quantity and data quality.
A Comprehensive Study of Knowledge Editing for Large Language Models
Large Language Models (LLMs) have shown extraordinary capabilities in understanding and generating text that closely mirrors human communication. However, a primary limitation lies in the significant computational demands during training, arising from their extensive parameterization. This challenge is further intensified by the dynamic nature of the world, necessitating frequent updates to LLMs to correct outdated information or integrate new knowledge, thereby ensuring their continued relevance.
🏃 Coming soon: Add one-sentence intro to each paper.