Merge pull request #169 from reportportal/EPMRPP-90168_train_rework
[EPMRPP-90168] train rework
HardNorth authored Aug 16, 2024
2 parents eedbbc8 + d42916b commit 004424b
Showing 192 changed files with 5,224 additions and 4,579 deletions.
1 change: 1 addition & 0 deletions .flake8
```diff
@@ -1,3 +1,4 @@
 [flake8]
 ignore = E741, W503
 exclude = .git,venv,env,fixtures
+max-line-length = 119
```
1 change: 1 addition & 0 deletions .github/workflows/tests.yml
```diff
@@ -18,6 +18,7 @@ on: [ push, pull_request ]
 jobs:
   build:
     runs-on: ubuntu-latest
+    timeout-minutes: 20
     steps:
       - name: Checkout repository
         uses: actions/checkout@v4
```
6 changes: 3 additions & 3 deletions README.md
```diff
@@ -58,9 +58,9 @@
 | PATTERN_LABEL_MIN_PERCENT | float | 0.9 | the value of minimum percent of the same issue type for pattern to be suggested as a pattern with a label |
 | PATTERN_LABEL_MIN_COUNT | integer | 5 | the value of minimum count of pattern occurrence to be suggested as a pattern with a label |
 | PATTERN_MIN_COUNT | integer | 10 | the value of minimum count of pattern occurrence to be suggested as a pattern without a label |
-| MAX_LOGS_FOR_DEFECT_TYPE_MODEL | integer | 10000 | the value of maximum count of logs per defect type to add into defect type model training. Default value is chosen in cosideration of having space for analyzer_train docker image setuo of 1GB, if you can give more GB you can linearly allow more logs to be considered. |
-| PROB_CUSTOM_MODEL_SUGGESTIONS | float | 0.7 | the probability of custom retrained model to be used for running when suggestions are requested. The maximum value is 0.8, because we want at least 20% of requests to process with a global model not to overfit for project too much. The bigger the value of this env varibale the more often custom retrained model will be used. |
-| PROB_CUSTOM_MODEL_AUTO_ANALYSIS | float | 0.5 | the probability of custom retrained model to be used for running when auto-analysis is performed. The maximum value is 1.0. The bigger the value of this env varibale the more often custom retrained model will be used. |
+| MAX_LOGS_FOR_DEFECT_TYPE_MODEL | integer | 10000 | the value of maximum count of logs per defect type to add into defect type model training. Default value is chosen in consideration of having space for an analyzer_train docker image setup of 1GB; if you can give more GB you can linearly allow more logs to be considered. |
+| PROB_CUSTOM_MODEL_SUGGESTIONS | float | 0.7 | the probability of custom retrained model to be used for running when suggestions are requested. The maximum value is 0.8, because we want at least 20% of requests to process with a global model not to overfit for project too much. The bigger the value of this env variable the more often custom retrained model will be used. |
+| PROB_CUSTOM_MODEL_AUTO_ANALYSIS | float | 0.5 | the probability of custom retrained model to be used for running when auto-analysis is performed. The maximum value is 1.0. The bigger the value of this env variable the more often custom retrained model will be used. |
 | MAX_SUGGESTIONS_NUMBER | integer | 3 | the maximum number of suggestions shown in the ML suggestions area in the defect type editor. |
 
 ## Instructions for analyzer setup without Docker
```
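The probability settings described in the README rows above can be sketched as follows. This is a minimal illustration, not the analyzer's actual code: `choose_model` and its `roll` parameter are hypothetical names, and only the documented behavior (a 0.8 cap for suggestions, falling back to the global model) is taken from the source.

```python
import random
from typing import Optional


def choose_model(prob_custom: float, has_custom_model: bool,
                 roll: Optional[float] = None) -> str:
    """Pick the custom retrained model with probability prob_custom, else the global one."""
    # Cap at 0.8 so at least 20% of suggestion requests still use the global
    # model, which keeps a project from overfitting to its own data.
    p = min(prob_custom, 0.8)
    if roll is None:
        roll = random.random()
    return "custom" if has_custom_model and roll < p else "global"
```

Passing an explicit `roll` makes the choice deterministic for testing; in real use the roll would come from a random draw per request.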
29 changes: 22 additions & 7 deletions app/amqp/amqp_handler.py
```diff
@@ -14,12 +14,13 @@

 import json
 import uuid
-from typing import Callable, Any
+from typing import Callable, Any, Optional

 from pika.adapters.blocking_connection import BlockingChannel
 from pika.spec import Basic, BasicProperties

-from app.commons import launch_objects, logging
+from app.commons import logging
+from app.commons.model import launch_objects, ml

 logger = logging.getLogger("analyzerApp.amqpHandler")
```
```diff
@@ -54,11 +55,16 @@ def prepare_delete_index(body: Any) -> int:
     return int(body)


-def prepare_test_item_info(test_item_info: Any) -> Any:
+def prepare_test_item_info(test_item_info: Any) -> launch_objects.TestItemInfo:
     """Function for deserializing test item info for suggestions"""
     return launch_objects.TestItemInfo(**test_item_info)


+def prepare_train_info(train_info: dict) -> ml.TrainInfo:
+    """Function for deserializing train info object"""
+    return ml.TrainInfo(**train_info)
+
+
 def prepare_search_response_data(response: list | dict) -> str:
     """Function for serializing response from search request"""
     return json.dumps(response)
```
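The `prepare_*` helpers above share one pattern: unpack a decoded JSON dict into a typed model via keyword expansion. A stdlib-only sketch of that pattern follows; `TrainInfoSketch` and its fields are hypothetical, since the real `ml.TrainInfo` schema is not shown in this diff.

```python
from dataclasses import dataclass


@dataclass
class TrainInfoSketch:
    # Hypothetical fields for illustration only; the real ml.TrainInfo is
    # defined in the project's app.commons.model package and may differ.
    model_type: str
    project: int


def prepare_train_info_sketch(train_info: dict) -> TrainInfoSketch:
    # Same shape as prepare_train_info: expand the decoded JSON dict
    # into a typed object, failing loudly if keys do not match fields.
    return TrainInfoSketch(**train_info)
```

The expansion raises `TypeError` on unexpected keys, so malformed messages are rejected at deserialization time rather than deep inside a handler.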
```diff
@@ -121,9 +127,10 @@ def handle_amqp_request(channel: BlockingChannel, method: Basic.Deliver, props:
     if publish_result:
         try:
             if props.reply_to:
-                channel.basic_publish(exchange='', routing_key=props.reply_to, properties=BasicProperties(
-                    correlation_id=props.correlation_id, content_type='application/json'), mandatory=False,
-                    body=bytes(response_body, 'utf-8'))
+                channel.basic_publish(
+                    exchange='', routing_key=props.reply_to,
+                    properties=BasicProperties(correlation_id=props.correlation_id, content_type='application/json'),
+                    mandatory=False, body=bytes(response_body, 'utf-8'))
         except Exception as exc:
             logger.error('Failed to publish result')
             logger.exception(exc)
```
```diff
@@ -132,7 +139,8 @@ def handle_inner_amqp_request(channel: BlockingChannel, method: Basic.Deliver, props:


 def handle_inner_amqp_request(_: BlockingChannel, method: Basic.Deliver, props: BasicProperties, body: bytes,
-                              request_handler: Callable[[Any], Any]):
+                              request_handler: Callable[[Any], Any],
+                              prepare_data_func: Optional[Callable[[Any], Any]] = None):
     """Function for handling inner amqp requests."""
     logging.new_correlation_id()
     logger.debug(f'Started inner message processing.\n--Method: {method}\n'
```
```diff
@@ -143,6 +151,13 @@ def handle_inner_amqp_request(_: BlockingChannel, method: Basic.Deliver, props:
         logger.error('Failed to parse message body to JSON')
         logger.exception(exc)
         return
+    if prepare_data_func:
+        try:
+            message = prepare_data_func(message)
+        except Exception as exc:
+            logger.error('Failed to prepare message body')
+            logger.exception(exc)
+            return
     try:
         request_handler(message)
     except Exception as exc:
```
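The new `prepare_data_func` hook separates deserialization from handling: the raw JSON is decoded, optionally converted into a typed object, and only then passed to the real handler. A minimal sketch of that same flow without RabbitMQ (the `handle_message` name and its return values are illustrative, not the project's API):

```python
import json
from typing import Any, Callable, Optional


def handle_message(body: bytes, request_handler: Callable[[Any], Any],
                   prepare_data_func: Optional[Callable[[Any], Any]] = None):
    # Mirrors the flow of handle_inner_amqp_request: decode JSON, optionally
    # convert the dict into a typed object, then invoke the handler. Each
    # stage fails independently, so a bad message never reaches the handler.
    try:
        message = json.loads(body)
    except Exception:
        return None  # parse failure: the real code logs and returns
    if prepare_data_func:
        try:
            message = prepare_data_func(message)
        except Exception:
            return None  # preparation failure: likewise logged and dropped
    return request_handler(message)
```

With this split, a handler such as a training routine can assume it always receives a well-formed typed object (e.g. the result of `prepare_train_info`), while all validation failures are caught and logged in one place.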
