road-core/service

About The Project

Road Core Service (RCS) is an AI-powered assistant that runs on OpenShift and provides answers to product questions using backend LLM services. Currently OpenAI, Azure OpenAI, OpenShift AI, RHEL AI, and Watsonx are officially supported as backends. Other providers, even ones that are not fully supported, can be used as well. For example, it is possible to use BAM (IBM's research environment). It is also possible to run InstructLab locally, configure a model, and connect to it.

Prerequisites

  • Python 3.11 or Python 3.12
    • please note that currently Python 3.13 is not officially supported, because Road Core Service depends on some packages that cannot be used with this Python version
    • all sources are made (backward) compatible with Python 3.11; this is checked on CI
  • Git, pip and PDM
  • An LLM API key or API secret (in case of Azure OpenAI)
  • (Optional) extra certificates to access LLM API

Installation

1. Clone the repo

git clone https://github.com/road-core/service.git
cd service

2. Install python packages

make install-deps

3. Get API keys

This step depends on the provider type.

OpenAI

Please look at OpenAI api key for instructions on creating an API key.

Azure OpenAI

Please look at the following articles describing how to retrieve the API key or secret from Azure: Get subscription and tenant IDs in the Azure portal and How to get client id and client secret in Azure Portal. Currently both ways to authenticate to Azure OpenAI are supported: by API key or by client secret.

WatsonX

Please look at Generating API keys for authentication.

OpenShift AI

(TODO: to be updated)

RHEL AI

(TODO: to be updated)

BAM (not officially supported)

1. Get a BAM API Key at [https://bam.res.ibm.com](https://bam.res.ibm.com)
    * Login with your IBM W3 Id credentials.
    * Copy the API Key from the Documentation section.
    ![BAM API Key](docs/bam_api_key.png)
2. BAM API URL: https://bam-api.res.ibm.com

Locally running InstructLab

This depends on the configuration, but usually there is no need to generate or use an API key.

4. Store local copies of API keys securely

Here is a proposed scheme for storing API keys on your development workstation. It is similar to how private keys are stored for OpenSSH. It keeps copies of files containing API keys from getting scattered around and forgotten:

$ cd <road-core/service local git repo root>
$ find ~/.openai -ls
72906922      0 drwx------   1 username username        6 Feb  6 16:45 /home/username/.openai
72906953      4 -rw-------   1 username username       52 Feb  6 16:45 /home/username/.openai/key
$ ls -l openai_api_key.txt
lrwxrwxrwx. 1 username username 26 Feb  6 17:41 openai_api_key.txt -> /home/username/.openai/key
$ grep openai_api_key.txt rcsconfig.yaml
 credentials_path: openai_api_key.txt

Configuration

1. Configure Road Core Service (RCS)

Service configuration is in YAML format. It is loaded from a file referred to by the RCS_CONFIG_FILE environment variable and defaults to rcsconfig.yaml in the current directory. You can find an example configuration in the examples/rcsconfig.yaml file in this repository.
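For illustration, assuming your configuration file lives in the repository root, you can point the service at it explicitly before starting it (RCS_CONFIG_FILE and make run are both described in this document; the path is a placeholder):

export RCS_CONFIG_FILE=$(pwd)/rcsconfig.yaml
make run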

2. Configure LLM providers

The example configuration file defines six LLM providers: BAM, OpenAI, Azure OpenAI, Watsonx, OpenShift AI VLLM (RHOAI VLLM), and RHELAI (RHEL AI), with BAM as the default provider. If you prefer to use a different LLM provider, such as OpenAI, ensure that the provider definition points to a file containing a valid API key for that provider, and change the default_model and default_provider values to reference the selected provider and model.

The example configuration also defines the locally running InstructLab provider, which is OpenAI-compatible and can serve several models. Please look at the InstructLab pages for detailed information on how to set up and run this provider.
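Because a locally running InstructLab server is exposed through an OpenAI-compatible API, its provider entry can follow the same shape as the other OpenAI-compatible providers in this document. The snippet below is only a sketch: the provider name, port, model name, and credentials file are placeholders and depend on how ilab serve is configured on your machine.

    - name: my_instructlab
      type: openai
      url: "http://localhost:8000/v1"
      credentials_path: instructlab_api_key.txt   # placeholder; local servers often accept any token
      models:
        - name: merlinite-7b-lab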

API credentials are in turn loaded from files specified in the config YAML by the credentials_path attributes. If these paths are relative, they are relative to the current working directory. To use the example rcsconfig.yaml as is, place your BAM API Key into a file named bam_api_key.txt in your working directory.
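For example, a minimal way to create that credentials file (the key value is a placeholder):

echo -n "YOUR_BAM_API_KEY" > bam_api_key.txt
chmod 600 bam_api_key.txt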

[!NOTE] There are two supported methods to provide credentials for Azure OpenAI. The first method is compatible with other providers, i.e. credentials_path points to a directory containing one file with the API token. In the second method, that directory should contain three files named tenant_id, client_id, and client_secret. Please look at the following articles describing how to retrieve this information from Azure: Get subscription and tenant IDs in the Azure portal and How to get client id and client secret in Azure Portal.
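As an illustration of the second method, the credentials directory could be prepared as follows (the directory name azure_openai_creds and all values are placeholders; credentials_path would then point to this directory):

mkdir -p azure_openai_creds
echo -n "YOUR_TENANT_ID" > azure_openai_creds/tenant_id
echo -n "YOUR_CLIENT_ID" > azure_openai_creds/client_id
echo -n "YOUR_CLIENT_SECRET" > azure_openai_creds/client_secret
chmod 600 azure_openai_creds/*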

OpenAI provider

Multiple models can be configured; the default_model will be used unless a different model is specified in the REST API request:

- name: my_openai
  type: openai
  url: "https://api.openai.com/v1"
  credentials_path: openai_api_key.txt
  models:
    - name: gpt-4-1106-preview
    - name: gpt-4o-mini
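To use a provider/model combination other than the defaults for a single request, the values can be supplied in the query payload. The call below is a sketch; it assumes the /v1/query endpoint accepts provider and model fields:

curl -X 'POST' 'http://127.0.0.1:8080/v1/query' -H 'Content-Type: application/json' \
  -d '{"query": "what is a deployment?", "provider": "my_openai", "model": "gpt-4o-mini"}'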

Azure OpenAI

Make sure the url and deployment_name are set correctly.

- name: my_azure_openai
  type: azure_openai
  url: "https://myendpoint.openai.azure.com/"
  credentials_path: azure_openai_api_key.txt
  deployment_name: my_azure_openai_deployment_name
  models:
    - name: gpt-4o-mini

WatsonX

Make sure the project_id is set up correctly.

- name: my_watsonx
  type: watsonx
  url: "https://us-south.ml.cloud.ibm.com"
  credentials_path: watsonx_api_key.txt
  project_id: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
  models:
    - name: ibm/granite-13b-chat-v2

RHEL AI provider

It is possible to use RHEL AI as a provider too. That provider is OpenAI-compatible and can be configured the same way as other OpenAI providers. For example, if RHEL AI is running as an EC2 instance and the granite-7b-lab model is deployed, the configuration might look like this:

    - name: my_rhelai
      type: openai
      url: "http://{PATH}.amazonaws.com:8000/v1/"
      credentials_path: openai_api_key.txt
      models:
        - name: granite-7b-lab

Red Hat OpenShift AI

To use RHOAI (Red Hat OpenShift AI) as a provider, the following configuration can be used (the mistral-7b-instruct model is supported by RHOAI, as well as other models):

    - name: my_rhoai
      type: openai
      url: "http://{PATH}:8000/v1/"
      credentials_path: openai_api_key.txt
      models:
        - name: mistral-7b-instruct

Local ollama server

It is possible to configure the service to use a local Ollama server. Please look at the examples/olsconfig-local-ollama.yaml file, which describes all required steps.

  1. Common providers configuration options

    • name: unique name, can be any proper YAML literal

    • type: provider type: any of bam, openai, azure_openai, rhoai_vllm, rhelai_vllm, or watsonx

    • url: URL to be used to call LLM via REST API

    • api_key: path to secret (token) used to call LLM via REST API

    • models: list of model configurations (model name + model-specific parameters)

      Notes:

      • Context window size varies based on provider/model.
      • Max response tokens depends on user needs and should be in reasonable proportion to the context window size. If the value is too low, there is a risk of response truncation. If it is set too high, too much of the window is reserved for the response and the history/RAG context is truncated unnecessarily.
      • These are optional settings; if not set, defaults are used (which may be incorrect and may cause truncation or potentially an error by exceeding the context window).
  2. Specific configuration options for WatsonX

    • project_id: as specified on WatsonX AI page
  3. Specific configuration options for Azure OpenAI

    • api_version: as specified in the official documentation; if not set, 2024-02-15-preview is used by default.
    • deployment_name: as specified in AzureAI project settings
  4. Default provider and default model

    • one provider and its model need to be selected as the default. When no provider+model is specified in REST API calls, the default provider and model are used:

         rcs_config:
           default_provider: my_bam
           default_model: ibm/granite-13b-chat-v2

3. Configure RCS Authentication

[!NOTE] Currently, only K8S-based authentication can be used. In future versions, more authentication mechanisms will be configurable.

This section provides guidance on how to configure authentication within RCS. It includes instructions on enabling or disabling authentication, configuring authentication through OCP RBAC, overriding authentication configurations, and specifying a static authentication token in development environments.

  1. Enabling and Disabling Authentication

    Authentication is enabled by default in RCS. To disable authentication, modify the dev_config in your configuration file as shown below:

       dev_config:
          disable_auth: true
  2. Configuring Authentication with OCP RBAC

    RCS utilizes OCP RBAC for authentication, necessitating connectivity to an OCP cluster. It automatically selects the configuration from the first available source, either an in-cluster configuration or a KubeConfig file.

  3. Overriding Authentication Configuration

    You can customize the authentication configuration by overriding the default settings. The configurable options include:

    • Kubernetes Cluster API URL (k8s_cluster_api): The URL of the K8S/OCP API server where tokens are validated.
    • CA Certificate Path (k8s_ca_cert_path): Path to a CA certificate for clusters with self-signed certificates.
    • Skip TLS Verification (skip_tls_verification): If true, the Kubernetes client skips TLS certificate validation for the OCP cluster.

    To apply any of these overrides, update your configuration file as follows:

       rcs_config:
          authentication_config:
             k8s_cluster_api: "https://api.example.com:6443"
             k8s_ca_cert_path: "/Users/home/ca.crt"
             skip_tls_verification: false
  4. Providing a Static Authentication Token in Development Environments

    For development environments, you may wish to use a static token for authentication purposes. This can be configured in the dev_config section of your configuration file:

       dev_config:
          k8s_auth_token: your-user-token

    Note: using a static token requires you to also set the k8s_cluster_api described above, as this disables loading the OCP config from in-cluster/kubeconfig.

4. Configure RCS TLS communication

This section provides instructions on configuring TLS (Transport Layer Security) for the RCS Application, enabling secure connections via HTTPS. TLS is enabled by default; however, if necessary, it can be disabled through the dev_config settings.

  1. Enabling and Disabling TLS

    By default, TLS is enabled in RCS. To disable TLS, adjust the dev_config in your configuration file as shown below:

       dev_config:
          disable_tls: true
  2. Configuring TLS in local Environments:

    1. Generate Self-Signed Certificates: To generate self-signed certificates, run the following command from the project's root directory:
         ./scripts/generate-certs.sh
    2. Update RCS Configuration: Modify your config.yaml to include paths to your certificate and its private key:
         rcs_config:
            tls_config:
               tls_certificate_path: /full/path/to/certs/cert.pem
               tls_key_path: /full/path/to/certs/key.pem
    3. Launch RCS with HTTPS: After applying the above configurations, RCS will run over HTTPS.
  3. Configuring RCS in OpenShift:

    For deploying in OpenShift, Service-Served Certificates can be utilized. Update your rcs-config.yaml as shown below, based on the example provided in the examples directory:

       rcs_config:
          tls_config:
             tls_certificate_path: /app-root/certs/cert.pem
             tls_key_path: /app-root/certs/key.pem
  4. Using a Private Key with a Password: If your private key is encrypted with a password, specify a path to a file that contains the key password as follows:

       rcs_config:
          tls_config:
             tls_key_password_path: /app-root/certs/password.txt
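Once TLS is enabled locally (item 2 above), the HTTPS endpoint can be verified against the self-signed certificate; a sketch using the /metrics endpoint described later in this document (the certificate must of course match the host name you connect to):

curl --cacert /full/path/to/certs/cert.pem 'https://localhost:8080/metrics'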

5. (Optional) Configure the local document store

The following command downloads a copy of the whole image containing the RAG embedding model and vector database:

make get-rag

Please note that the link to the specific image to be downloaded is stored in the file build.args (that file is automatically updated by bots when a new RAG is re-generated).

6. (Optional) Configure conversation cache

The conversation cache can be stored in memory (its content will be lost after shutdown) or in a PostgreSQL database. The storage type is specified in the rcsconfig.yaml configuration file.

  1. Cache stored in memory:
    rcs_config:
       conversation_cache:
          type: memory
          memory:
             max_entries: 1000
  2. Cache stored in PostgreSQL:
    conversation_cache:
       type: postgres
       postgres:
          host: "foobar.com"
          port: "1234"
          dbname: "test"
          user: "user"
          password_path: postgres_password.txt
          ca_cert_path: postgres_cert.crt
          ssl_mode: "require"
    In this case, the file postgres_password.txt contains the password required to connect to PostgreSQL. A CA certificate can also be specified via ca_cert_path (postgres_cert.crt above) to verify a trusted TLS connection with the server. All these files need to be accessible.
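The password file can be created in the same way as the API key files, for example (the value is a placeholder):

echo -n "YOUR_POSTGRES_PASSWORD" > postgres_password.txt
chmod 600 postgres_password.txt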

7. (Optional) Incorporating additional CA(s). You have the option to include an extra TLS certificate into the RCS trust store as follows.

      rcs_config:
         extra_ca:
            - "path/to/cert_1.crt"
            - "path/to/cert_2.crt"

This action may be required for self-hosted LLMs.

8. (Optional) Configure the number of workers

By default, the number of workers is set to 1. You can increase the number of workers to scale up the REST API by modifying the max_workers config option in rcsconfig.yaml.

      rcs_config:
        max_workers: 4

9. Registering a new LLM provider

Please look here for more info.

10. TLS security profiles

A TLS security profile can be set for the service itself and also for any configured provider. To specify a TLS security profile for the service, the following section can be added into the rcs section in the rcsconfig.yaml configuration file:

  tlsSecurityProfile:
    type: OldType
    ciphers:
        - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
        - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    minTLSVersion: VersionTLS13
  • type can be set to: OldType, IntermediateType, ModernType, or Custom
  • minTLSVersion can be set to: VersionTLS10, VersionTLS11, VersionTLS12, or VersionTLS13
  • ciphers is a list of enabled ciphers. The values are not checked.

Please look into the examples folder, which contains olsconfig.yaml with a filled-in TLS security profile for the service. Additionally, the TLS security profile can be set for any configured provider. In this case, the tlsSecurityProfile needs to be added into the llm_providers/{selected_provider} section of the olsconfig.yaml file. For example:

llm_providers:
  - name: my_openai
    type: openai
    url: "https://api.openai.com/v1"
    credentials_path: openai_api_key.txt
    models:
      - name: gpt-4-1106-preview
      - name: gpt-4o-mini
    tlsSecurityProfile:
      type: Custom
      ciphers:
          - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
          - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
      minTLSVersion: VersionTLS13

[!NOTE] The tlsSecurityProfile is fully optional. When it is not specified, the LLM call won't be affected by specific SSL/TLS settings.

11. Fine tuning

The service uses a so-called system prompt to put the question into context before the question is sent to the selected LLM. The default system prompt is fine-tuned for questions about OpenShift and Kubernetes. It is possible to use a different system prompt via the configuration option system_prompt_path in the rcs_config section. That option must contain the path to a text file with the actual system prompt (which can contain multiple lines). An example of such a configuration:

rcs_config:
  system_prompt_path: "system_prompts/system_prompt_for_product_XYZZY"

Usage

Deployments

Local Deployment

The RCS service can be started locally. In this case, the Gradio web UI is used to interact with the service. Alternatively, the service can be accessed through its REST API.

[!TIP] To enable the Gradio web UI you need to have the following dev_config section in your configuration file:

dev_config:
  enable_dev_ui: true
  ...
  ...
  ...

Run the server

If the Python virtual environment is already set up, it is possible to start the service with the following command:

make run

It is also possible to initialize the virtual environment and start the service using just one command:

pdm start

Optionally run with podman

There is an all-in-one image that has the document store included already.

  1. Follow steps above to create your config yaml and your API key file(s).

  2. Place your config yaml and your API key file(s) in a known location (eg: /path/to/config)

  3. Make sure your config yaml references the config folder for the path to your key file(s) (eg: credentials_path: config/openai_api_key.txt)

  4. Run the all-in-one-container. Example invocation:

     podman run -it --rm -v /path/to/config:/app-root/config:Z \
     -e RCS_CONFIG_FILE=/app-root/config/rcsconfig.yaml -p 8080:8080 \
     quay.io/openshift-lightspeed/lightspeed-service-api:latest

Optionally run inside an OpenShift environment

The examples folder contains a set of YAML manifests, openshift-lightspeed.yaml. It includes all the resources necessary to get Road Core Service running in a cluster. It is configured to use only OpenAI as the inference endpoint, but you can easily modify these manifests; look at the rcsconfig.yaml to see how to alter it to work with BAM as the provider.

There is a commented-out OpenShift Route with TLS Edge termination available if you wish to use it.

To deploy, assuming you already have an OpenShift environment to target and that you are logged in with sufficient permissions:

  1. Make the change to your API keys and/or provider configuration in the manifest file
  2. Create a namespace/project to hold RCS
  3. oc apply -f examples/openshift-lightspeed-tls.yaml -n created-namespace

Once deployed, it is probably easiest to oc port-forward into the pod where RCS is running so that you can access it from your local machine.
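A sketch of such a port-forward (the namespace and pod name are placeholders; adjust them to your deployment):

oc port-forward -n <created-namespace> pod/<rcs-pod-name> 8080:8080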

Communication with the service

Query the server

To send a request to the server you can use the following curl command:

curl -X 'POST' 'http://127.0.0.1:8080/v1/query' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{"query": "write a deployment yaml for the mongodb image"}'
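To continue a conversation, the conversation identifier returned by the service can be passed back in subsequent requests. The call below is a sketch; it assumes the request and response use a conversation_id field, which may differ between releases:

curl -X 'POST' 'http://127.0.0.1:8080/v1/query' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{"conversation_id": "<id-from-previous-response>", "query": "now set the number of replicas to 3"}'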

Swagger UI

The web page with the Swagger UI is available at the standard /docs endpoint. If the service is running on localhost on port 8080, the Swagger UI can be accessed at http://localhost:8080/docs.

OpenAPI

The OpenAPI schema is available in docs/openapi.json. It is possible to re-generate the schema document by using:

make schema

When the RCS service is started, the OpenAPI schema is available on the /openapi.json endpoint. For example, for a service running on localhost on port 8080, it can be accessed and pretty-printed by using the following command:

curl 'http://127.0.0.1:8080/openapi.json' | jq .

Metrics

The service exposes metrics in Prometheus format on the /metrics endpoint. Scraping them is straightforward:

curl 'http://127.0.0.1:8080/metrics'

Gradio UI

There is a minimal Gradio UI you can use when running the RCS server locally. To use it, enable the UI in the rcsconfig.yaml file:

dev_config:
  enable_dev_ui: true

Then start the RCS server as described in Run the server and browse to the built-in Gradio interface at http://localhost:8080/ui

By default this interface will ask the RCS server to retain and use your conversation history for subsequent interactions. To disable this behavior, expand the Additional Inputs configuration at the bottom of the page and uncheck the Use history checkbox. When history is not used, each message you submit to RCS is treated independently, with no context of previous interactions.

Swagger UI

RCS API documentation is available at http://localhost:8080/docs

CPU profiling

To enable CPU profiling, deploy your own Pyroscope server and specify its URL in the dev_config as shown below. RCS will then send profiles to the specified endpoint.

dev_config:
  pyroscope_url: https://your-pyroscope-url.com

Memory profiling

To enable memory profiling, start the server with the command below.

make memray-run

Once you are done executing a few queries and want to look at the memory flamegraphs, run the command below; it generates an HTML file with the flamegraph.

make memray-flamegraph

Deploying RCS on OpenShift

A Helm chart is available for installing the service in OpenShift.

Before installing the chart, you must configure the auth.key parameter in the Values file.

To install the chart with the release name ols-release in the namespace openshift-lightspeed:

helm upgrade --install ols-release helm/ --create-namespace --namespace openshift-lightspeed

The command deploys the service in the default configuration.

The default configuration deploys RCS fronted by a kube-rbac-proxy.

To uninstall/delete the chart with the release name ols-release:

helm delete ols-release --namespace openshift-lightspeed

Chart customization is available using the Values file.
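For example, rather than editing helm/values.yaml in place, individual values can be overridden at install time. The sketch below only uses the auth.key parameter mentioned above; any other keys depend on the chart:

helm upgrade --install ols-release helm/ --create-namespace --namespace openshift-lightspeed \
  --set auth.key=<your-auth-key>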

Project structure

  1. REST API handlers
  2. Configuration loader
  3. LLM providers registry
  4. LLM loader
  5. Interface to LLM providers
  6. Doc retriever from vector storage
  7. Question validator
  8. Docs summarizer
  9. Conversation cache
  10. (Local) Web-based user interface

Overall architecture

Overall architecture with all main parts is displayed below:

Architecture diagram

Road Core Service is based on the FastAPI framework (running on Uvicorn) with LangChain for LLM interactions. The service is split into several parts, described below.

FastAPI server

Handles REST API requests from clients (mainly from the UI console, but it can be any REST API-compatible tool), handles the request queue, and also exports Prometheus metrics. Uvicorn is used as the ASGI server that runs the FastAPI application.

Authorization checker

Manages the authentication flow for REST API endpoints. Currently K8S/OCP-based authorization is used, but in the future it will be implemented in a more modular way to allow registering other auth checkers.

Query handler

Retrieves user queries, validates them, redacts them, calls LLM, and summarizes feedback.

Redactor

Redacts the question based on the regex filters provided in the configuration file.

Question validator

Validates questions and provides one-word responses. It is an optional component.

Document summarizer

Summarizes documentation context.

Conversation history cache interface

Unified interface used to store and retrieve conversation history with optionally defined maximum length.

Conversation history cache implementations

Currently there exist three conversation history cache implementations:

  1. in-memory cache
  2. Redis cache
  3. Postgres cache

Entries stored in the cache have compound keys that consist of user_id and conversation_id. It is possible for one user to have multiple conversations and thus multiple conversation_id values at the same time. A global cache capacity can be specified. The capacity is measured as the number of entries; entry sizes are ignored in this computation.

In-memory cache

The in-memory cache is implemented as a queue with a defined maximum capacity specified as the number of entries that can be stored in the cache. That number is the limit for all cache entries, regardless of how many users are using the LLM. When a new entry is put into the cache and the maximum capacity has been reached, the oldest entry is removed from the cache.

Redis cache

Entries are stored in Redis as a dictionary. An LRU policy can be specified that allows Redis to automatically remove the oldest entries.

Postgres cache

Entries are stored in one Postgres table with the following schema:

     Column      |            Type             | Nullable | Default | Storage  |
-----------------+-----------------------------+----------+---------+----------+
 user_id         | text                        | not null |         | extended |
 conversation_id | text                        | not null |         | extended |
 value           | bytea                       |          |         | extended |
 updated_at      | timestamp without time zone |          |         | plain    |
Indexes:
    "cache_pkey" PRIMARY KEY, btree (user_id, conversation_id)
    "cache_key_key" UNIQUE CONSTRAINT, btree (key)
    "timestamps" btree (updated_at)
Access method: heap

During insertion of a new record, the maximum number of entries is checked, and when the defined capacity is reached, the oldest entry is deleted.

LLM providers registry

Manages LLM provider implementations. If a new LLM provider type needs to be added, it is registered by this machinery and its libraries are loaded to be used later.

LLM providers interface implementations

Currently the following LLM provider implementations exist:

  1. OpenAI
  2. Azure OpenAI
  3. RHEL AI
  4. OpenShift AI
  5. WatsonX
  6. BAM
  7. Fake provider (to be used by tests and benchmarks)

Sequence diagram

Sequence of operations performed when a user asks a question:

Sequence diagram

Token truncation algorithm

The context window size is limited for all supported LLMs, which means that a token truncation algorithm needs to be applied for longer queries, queries with long conversation history, etc. The current truncation logic/context window token check is as follows (a worked example is shown after the list):

  1. Tokens for the current prompt (system instruction + user query + attachments, if any) plus the tokens reserved for the response (default 512) must not exceed the model context window size; otherwise RCS will raise an error.
  2. The tokens above are the default tokens that are always used. If any tokens are left after the default usage, the RAG context is used completely or truncated, depending on how many tokens remain.
  3. Finally, if tokens are still available after using the complete RAG context, history is used (or truncated).
  4. A flag is set to True by the service if history is truncated due to the token limitation.
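A worked example with hypothetical numbers: assume a context window of 8192 tokens, 1500 tokens for the system instruction + user query + attachments, and the default 512 tokens reserved for the response.

  available after defaults   = 8192 - 1500 - 512 = 6180 tokens
  RAG context of 4000 tokens -> fits completely, 6180 - 4000 = 2180 tokens left
  conversation history       -> truncated to the remaining 2180 tokens if it is longer
                                (and the truncation flag is set to True)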

Token truncation

New pdm commands available in the project repository

╭───────────────────────────────────┬──────┬────────────────────────────────────────────────╮
│ Name                              │ Type │ Description                                    │
├───────────────────────────────────┼──────┼────────────────────────────────────────────────┤
│ benchmarks                        │ cmd  │ pdm run make benchmarks                        │
│ check-types                       │ cmd  │ pdm run make check-types                       │
│ coverage-report                   │ cmd  │ pdm run make coverage-report                   │
│ generate-schema                   │ cmd  │ pdm run make schema                            │
│ integration-tests-coverage-report │ cmd  │ pdm run make integration-tests-coverage-report │
│ requirements                      │ cmd  │ pdm run make requirements.txt                  │
│ security-check                    │ cmd  │ pdm run make security-check                    │
│ start                             │ cmd  │ pdm run make run                               │
│ test                              │ cmd  │ pdm run make test                              │
│ test-e2e                          │ cmd  │ pdm run make test-e2e                          │
│ test-integration                  │ cmd  │ pdm run make test-integration                  │
│ test-unit                         │ cmd  │ pdm run make test-unit                         │
│ unit-tests-coverage-report        │ cmd  │ pdm run make unit-tests-coverage-report        │
│ version                           │ cmd  │ pdm run make print-version                     │
╰───────────────────────────────────┴──────┴────────────────────────────────────────────────╯

Making a package with Road Core Service

The Road Core Service repository contains all the files needed to create a Python package and push this package into a Python package registry.

Create distribution archives

Distribution archives can be generated by the following command:

make distribution-archives

This command should create a subdirectory named dist with two archives containing the source package and the Python wheel package:

road_core-0.2.1-py3-none-any.whl
road_core-0.2.1.tar.gz

Retrieve API token for PyPI

To upload the package into the Python package registry you’ll need a PyPI API token. Create one at https://test.pypi.org/manage/account/#api-tokens, setting the “Scope” to “Entire account”. Don’t close the page until you have copied and saved the token — you won’t see that token again.

Upload distribution archives with package into Python registry

Then run the following command:

make upload-distribution-archives

The new package release should be visible at https://test.pypi.org/project/road-core/

Additional tools

Utility to generate OpenAPI schema

This script re-generates the OpenAPI schema for the Lightspeed Service REST API.

Path

scripts/generate_openapi_schema.py

Usage

pdm generate-schema

Utility to generate requirements.* files

Generates the list of packages to be prefetched in Cachi2 and used in Konflux for hermetic builds.

This script performs several steps:

  1. removes torch+cpu dependency from project file
  2. generates requirements.txt file from pyproject.toml + pdm.lock
  3. removes all torch dependencies (including CUDA/Nvidia packages)
  4. downloads torch+cpu wheel
  5. computes hashes for this wheel
  6. adds the URL to wheel + hash to resulting requirements.txt file
  7. downloads script pip_find_builddeps from the Cachito project
  8. generates requirements-build.in file
  9. compiles requirements-build.in file into requirements-build.txt file

Please note that this script depends on a tool that is downloaded from the repository containing the Cachito system. This tool is run locally without any additional security checks, so some care is needed (for example, run this script from within a containerized environment).

Path

scripts/generate_packages_to_prefetch.py

Usage

usage: generate_packages_to_prefetch.py [-h] [-p]

options:
  -h, --help            show this help message and exit
  -p, --process-special-packages
                        Enable or disable processing special packages like torch etc.
  -c, --cleanup         Enable or disable work directory cleanup
  -w WORK_DIRECTORY, --work-directory WORK_DIRECTORY
                        Work directory to store files generated during different stages
                        of processing

Known issue

When the SQLAlchemy package is not locked to the latest version in pyproject.toml and pdm.lock, this script will fail due to an issue in pip. To fix this issue, follow these steps:

  1. Look at https://pypi.org/project/SQLAlchemy/ to retrieve the latest SQLAlchemy version
  2. Update pyproject.toml file accordingly using SQLAlchemy=={latest_version}
  3. Run pdm update sqlalchemy

Uploads an artifact containing the pytest results and configuration to an S3 bucket.

Path

scripts/upload_artifact_s3.py

Usage

A dictionary containing the S3 bucket credentials must be specified, with the following keys:

  • AWS_BUCKET
  • AWS_REGION
  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY

Contributing

License

Published under the Apache 2.0 License
