Commit
Rag pipeline documentation
estohlmann authored Nov 26, 2024
1 parent add0a61 commit 7d8447b
Showing 4 changed files with 108 additions and 18 deletions.
31 changes: 20 additions & 11 deletions lambda/dockerimagebuilder/__init__.py
@@ -68,16 +68,16 @@ def handler(event: Dict[str, Any], context) -> Dict[str, Any]:  # type: ignore [
     rendered_userdata = rendered_userdata.replace("{{IMAGE_ID}}", image_tag)

     try:
-        instances = ec2_resource.create_instances(
-            ImageId=ami_id,
-            SubnetId=os.environ.get("LISA_SUBNET_ID", None),
-            MinCount=1,
-            MaxCount=1,
-            InstanceType="m5.large",
-            UserData=rendered_userdata,
-            IamInstanceProfile={"Arn": os.environ["LISA_INSTANCE_PROFILE"]},
-            BlockDeviceMappings=[{"DeviceName": "/dev/xvda", "Ebs": {"VolumeSize": 32}}],
-            TagSpecifications=[
+        # Define common parameters
+        instance_params = {
+            "ImageId": ami_id,
+            "MinCount": 1,
+            "MaxCount": 1,
+            "InstanceType": "m5.large",
+            "UserData": rendered_userdata,
+            "IamInstanceProfile": {"Arn": os.environ["LISA_INSTANCE_PROFILE"]},
+            "BlockDeviceMappings": [{"DeviceName": "/dev/xvda", "Ebs": {"VolumeSize": 32}}],
+            "TagSpecifications": [
                 {
                     "ResourceType": "instance",
                     "Tags": [
@@ -86,7 +86,16 @@ def handler(event: Dict[str, Any], context) -> Dict[str, Any]:  # type: ignore [
                     ],
                 }
             ],
-        )
+        }
+
+        # Add SubnetId if specified in environment
+        if "LISA_SUBNET_ID" in os.environ:
+            instance_params["SubnetId"] = os.environ["LISA_SUBNET_ID"]
+
+        # Create instance with parameters
+        instances = ec2_resource.create_instances(**instance_params)
+
         return {"instance_id": instances[0].instance_id, "image_tag": image_tag}

     except ClientError as e:
         raise e
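The refactor in this hunk uses a common pattern for boto3 calls with conditionally present parameters: assemble a plain dict, add optional keys only when they are configured, and splat it into the call. A minimal self-contained sketch of the pattern (the `build_instance_params` helper and its arguments are illustrative, not part of the LISA codebase):

```python
from typing import Any, Dict, Optional


def build_instance_params(ami_id: str, subnet_id: Optional[str] = None) -> Dict[str, Any]:
    """Assemble create_instances kwargs, adding SubnetId only when one is supplied."""
    params: Dict[str, Any] = {
        "ImageId": ami_id,
        "MinCount": 1,
        "MaxCount": 1,
        "InstanceType": "m5.large",
    }
    if subnet_id:  # omit the key entirely when no subnet is configured
        params["SubnetId"] = subnet_id
    return params


# Usage (sketch): ec2_resource.create_instances(**build_instance_params(ami_id, os.environ.get("LISA_SUBNET_ID")))
```

Omitting the key entirely avoids passing `SubnetId=None`, which boto3 rejects as an invalid parameter type.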
65 changes: 64 additions & 1 deletion lib/docs/user/rag.md
@@ -1 +1,64 @@
-# TODO
# Automated Document Vector Store Ingestion Pipeline

## Overview
The Automated Document Ingestion Pipeline is designed to enhance LISA's RAG capabilities. This feature provides customers with a flexible, scalable solution for loading documents into configured vector stores. There are two ways to load files into a vector store configured with LISA: manually via the chatbot user interface (UI), or via an ingestion pipeline. Files loaded via the chatbot UI are subject to Lambda's service limits on document file size and volume; documents loaded via a pipeline are not, further expanding LISA's ingestion capabilities. The pipeline supports PDF, DOCX, and plain-text files, with an individual file size limit of 50 MB.
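For example, a client-side guard matching the documented constraints (supported extensions and the 50 MB per-file limit) could look like the following sketch; the helper name and exact extension list are assumptions drawn from the prose above, not LISA APIs:

```python
import os

SUPPORTED_EXTENSIONS = {".pdf", ".docx", ".txt"}
MAX_FILE_SIZE_BYTES = 50 * 1024 * 1024  # documented 50 MB per-file limit


def is_ingestible(filename: str, size_bytes: int) -> bool:
    """Return True if the file matches the documented type and size constraints."""
    _, ext = os.path.splitext(filename.lower())
    return ext in SUPPORTED_EXTENSIONS and 0 < size_bytes <= MAX_FILE_SIZE_BYTES
```

Running such a check before uploading to the source S3 bucket avoids queuing files the pipeline would reject.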

Customers can set up multiple ingestion pipelines. For each pipeline, they define the vector store, the embedding model, and the ingestion trigger; a pipeline can run on an event trigger or on a daily schedule. When a pipeline runs, pre-processing converts files into the necessary format, then processing ingests the files with the specified embedding model and loads the data into the designated vector store. This feature leverages LISA's existing chunking and vectorizing capabilities.
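LISA's own chunking implementation is not shown here, but the chunkSize/chunkOverlap mechanics referenced above can be illustrated with a minimal sliding-window sketch (treating one word as one token for simplicity; real tokenization differs):

```python
from typing import List


def chunk_tokens(tokens: List[str], chunk_size: int, chunk_overlap: int) -> List[List[str]]:
    """Split tokens into windows of chunk_size, each sharing chunk_overlap tokens with its predecessor."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunkOverlap must be smaller than chunkSize")
    step = chunk_size - chunk_overlap  # how far the window advances each iteration
    return [tokens[i:i + chunk_size] for i in range(0, len(tokens), step)]
```

With `chunk_size=512` and `chunk_overlap=51`, consecutive chunks share 51 tokens, preserving context across chunk boundaries for retrieval.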

An upcoming release will add support for deleting files and content, as well as for listing the names and load dates of files in the vector store.

## Configuration

The Automated Document Ingestion Pipeline is configurable, allowing customers to tailor the ingestion process to their specific needs. Configuration is written in YAML by adding optional properties to an existing RAG repository definition; these properties specify the parameters for the ingestion process.

### Sample Configuration

Below is a sample configuration snippet:

```yaml
ragRepositories:
- repositoryId: pgvector-rag
type: pgvector
rdsConfig:
username: postgres
pipelines:
- chunkOverlap: 51
chunkSize: 512
embeddingModel: ${your embedding model ID}
s3Bucket: ${your source s3 bucket}
s3Prefix: /
trigger: ${daily or event (on upload)}
collectionName: project-mainline
```

### Configuration Parameters
- **chunkOverlap**: The number of tokens to overlap between chunks (51 in this example)
- **chunkSize**: The size of each document chunk (512 tokens in this example)
- **embeddingModel**: The ID of the embedding model to be used
- **s3Bucket**: The source S3 bucket where documents are stored
- **s3Prefix**: The prefix within the S3 bucket (root directory in this example)
- **trigger**: Specifies when the ingestion should occur (daily or on upload event)
- **collectionName**: The name of the collection in the vector store (project-mainline in this example)

This configuration allows customers to:
1. Define the chunking process for optimal document segmentation
2. Select the appropriate embedding model for their use case
3. Specify the source of their documents in S3
4. Choose between scheduled or event-driven ingestion
5. Organize their data into named collections within the vector store
By adjusting these parameters, customers can optimize the ingestion pipeline for their specific document types, update frequency, and retrieval requirements.
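A quick sanity check over a parsed pipeline entry can catch misconfigurations before deployment. The following sketch validates the fields described above against a plain dict (as produced by any YAML parser); the helper, its error messages, and the assumed trigger values `daily`/`event` are illustrative, not part of LISA:

```python
from typing import Any, Dict, List

REQUIRED_FIELDS = ["chunkOverlap", "chunkSize", "embeddingModel", "s3Bucket", "s3Prefix", "trigger"]


def validate_pipeline(pipeline: Dict[str, Any]) -> List[str]:
    """Return a list of problems found in a single pipeline definition (empty means valid)."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in pipeline]
    if not problems:
        if pipeline["chunkOverlap"] >= pipeline["chunkSize"]:
            problems.append("chunkOverlap must be smaller than chunkSize")
        if pipeline["trigger"] not in ("daily", "event"):
            problems.append("trigger must be 'daily' or 'event'")
    return problems
```

A check like this could run in CI against the rendered configuration before `cdk deploy` picks it up.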

## Benefits
1. **Flexibility**: Accommodates various data sources and formats
2. **Efficiency**: Streamlines the document ingestion process with pre-processing and intelligent indexing
3. **Customization**: Allows customers to choose and easily switch between preferred vector stores
4. **Integration**: Leverages existing LISA capabilities while extending functionality

## Use Cases
- Large-scale document ingestion for enterprise customers
- Integration of external mission-critical data sources
- Customized knowledge base creation for specific industries or applications

This new Automated Document Ingestion Pipeline significantly expands LISA's capabilities, providing customers with a powerful tool for managing and utilizing their document-based knowledge more effectively.
@@ -35,6 +35,7 @@ import { ModifyMethod } from '../../../shared/validation/modify-method';
import { z } from 'zod';
import { SerializedError } from '@reduxjs/toolkit';
import { getJsonDifference } from '../../../shared/util/utils';
+import { setConfirmationModal } from '../../../shared/reducers/modal.reducer';

export type CreateModelModalProps = {
visible: boolean;
@@ -281,7 +282,17 @@ export function CreateModelModal (props: CreateModelModalProps) : ReactElement {

return (
<Modal size={'large'} onDismiss={() => {
-            props.setVisible(false); props.setIsEdit(false); resetState();
+            dispatch(
+                setConfirmationModal({
+                    action: 'Abandon',
+                    resourceName: 'Model Creation',
+                    onConfirm: () => {
+                        props.setVisible(false);
+                        props.setIsEdit(false);
+                        resetState();
+                    },
+                    description: 'Are you sure you want to abandon your changes?'
+                }));
}} visible={props.visible} header={`${props.isEdit ? 'Update' : 'Create'} Model`}>
<Wizard
i18nStrings={{
@@ -322,9 +333,17 @@ export function CreateModelModal (props: CreateModelModalProps) : ReactElement {
scrollToInvalid();
}}
onCancel={() => {
-                    props.setVisible(false);
-                    props.setIsEdit(false);
-                    resetState();
+                    dispatch(
+                        setConfirmationModal({
+                            action: 'Abandon',
+                            resourceName: 'Model Creation',
+                            onConfirm: () => {
+                                props.setVisible(false);
+                                props.setIsEdit(false);
+                                resetState();
+                            },
+                            description: 'Are you sure you want to abandon your changes?'
+                        }));
}}
onSubmit={() => {
handleSubmit();
@@ -18,14 +18,13 @@ import { Modal as CloudscapeModal, Box, SpaceBetween, Button } from '@cloudscape
import React, { ReactElement, useState } from 'react';
import { useAppDispatch } from '../../config/store';
import { dismissModal } from '../reducers/modal.reducer';
-import { MutationActionCreatorResult } from '@reduxjs/toolkit/query';

export type CallbackFunction<T = any, R = void> = (props?: T) => R;

export type ConfirmationModalProps = {
action: string;
resourceName: string;
-    onConfirm: () => MutationActionCreatorResult<any>;
+    onConfirm: () => void;
postConfirm?: CallbackFunction;
description?: string | ReactElement;
disabled?: boolean;
