fix: forward scheduling example #35

Merged (13 commits) on Jul 31, 2024
2 changes: 1 addition & 1 deletion .github/workflows/codeql-analysis.yml
@@ -5,7 +5,7 @@ name: "CodeQL"

on:
schedule:
- cron: '0 0 27 1 *' # Runs at 00:00 UTC on the 27th of January
- cron: '0 0 29 2 *' # Runs at 00:00 UTC on the 29th of Feb

jobs:
analyze:
2 changes: 1 addition & 1 deletion .github/workflows/conda_release.yml
@@ -44,4 +44,4 @@ jobs:
conda-build .github/conda

- name: Upload to Anaconda
run: anaconda upload `conda-build .github/conda --output` --force
run: anaconda upload `conda-build .github/conda --output` --force
8 changes: 5 additions & 3 deletions .github/workflows/test.yml
@@ -15,7 +15,7 @@ jobs:
strategy:
matrix:
os: [ubuntu-latest, macOS-latest] # add windows-2019 when poetry allows installation with `-f` flag
python-version: [3.8, 3.9] # python 3.9 is not supported by all dependencies yet
python-version: [3.8, '3.10'] # python 3.9 is not supported by all dependencies yet

steps:
- uses: actions/checkout@v2
@@ -55,11 +55,13 @@ jobs:

- name: Upgrade pip
shell: bash
run: poetry run python -m pip install pip -U
run: pip install --upgrade pip

- name: Install dependencies
shell: bash
run: poetry install --no-interaction --no-root
run: |
pip install --no-cache-dir -r requirements.txt
pip install --extra-index-url https://pypi.org/simple --no-cache-dir coverage pytest codecov-cli>=0.4.1

- name: Run unittest
shell: bash
6 changes: 2 additions & 4 deletions README.md
@@ -16,7 +16,7 @@
[codeql-url]: https://github.com/longxingtan/python-lekin/actions/workflows/codeql-analysis.yml

<h1 align="center">
<img src="./docs/source/_static/logo.svg" width="490" align=center/>
<img src="./docs/source/_static/logo.svg" width="400" align=center/>
</h1><br>

[![LICENSE][license-image]][license-url]
@@ -26,13 +26,11 @@
[![Lint Status][lint-image]][lint-url]
[![Docs Status][docs-image]][docs-url]
[![Code Coverage][coverage-image]][coverage-url]
[![CodeQL Status][codeql-image]][codeql-url]

**[Documentation](https://python-lekin.readthedocs.io)** | **[Tutorials](https://python-lekin.readthedocs.io/en/latest/tutorials.html)** | **[Release Notes](https://python-lekin.readthedocs.io/en/latest/CHANGELOG.html)** | **[中文](https://github.com/LongxingTan/python-lekin/blob/master/README_zh_CN.md)**

**python-lekin** is a rapid-to-implement and easy-to-use Flexible Job Shop Scheduler Library, named after and inspired by [Lekin](https://web-static.stern.nyu.edu/om/software/lekin/). As a core function in **APS (advanced planning and scheduler)**, it helps manufacturers optimize the allocation of materials and production capacity optimally to balance demand and capacity.

- accelerate by
- Changeover Optimization
- Ready for demo, research and maybe production

@@ -55,7 +53,7 @@
**Installation**

``` shell
$ pip install lekin
pip install lekin
```

**Usage**
2 changes: 1 addition & 1 deletion README_zh_CN.md
@@ -14,7 +14,7 @@
[coverage-url]: https://codecov.io/github/longxingtan/python-lekin?branch=master

<h1 align="center">
<img src="./docs/source/_static/logo.svg" width="490" align=center/>
<img src="./docs/source/_static/logo.svg" width="400" align=center/>
</h1><br>

[![LICENSE][license-image]][license-url]
2 changes: 2 additions & 0 deletions docs/requirements_docs.txt
@@ -11,3 +11,5 @@ sphinx-autobuild

pandas
numpy
ortools
tensorflow
13 changes: 0 additions & 13 deletions docs/source/api.rst

This file was deleted.

59 changes: 30 additions & 29 deletions docs/source/application.rst
@@ -78,10 +78,10 @@ Demand represents customer orders to fulfill
5. Using machine k's idle start time and the task statuses, retrieve candidate tasks and store them as task list R;
6. Check whether task list R is empty; if it is, set k=k+1 and return to step 5, otherwise continue;
7. Sort the tasks by earliest available processing time, pick the earliest-starting one to process, and update the machine status, the task status, and the statuses of the downstream operations;
·determine the task's start and end times
·update the machine's release time
·update the current task's status, start time, and completion time
·update the earliest start time of the current task's successors (no update is needed if it is the product's last operation)
- determine the task's start and end times
- update the machine's release time
- update the current task's status, start time, and completion time
- update the earliest start time of the current task's successors (no update is needed if it is the product's last operation)
8. Check whether all tasks are finished; if so, stop, otherwise return to step 4.

Resolving a conflict is itself a forward-scheduling pass: all tasks placed on that resource are forward-scheduled in sequence.
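The dispatch loop in steps 4-8 above can be sketched as follows. This is a minimal sketch with hypothetical `Task`/`Machine` stand-ins and integer time buckets, not python-lekin's own classes:

```python
# Minimal sketch of the machine-idle dispatch loop (steps 4-8 above).
# Task and Machine are hypothetical stand-ins, not python-lekin classes.
from dataclasses import dataclass, field

@dataclass
class Task:
    task_id: str
    duration: int
    earliest_start: int = 0
    done: bool = False
    successors: list = field(default_factory=list)

@dataclass
class Machine:
    machine_id: str
    release_time: int = 0  # when the machine next becomes idle

def dispatch(machines, tasks):
    """Repeatedly pick the earliest-startable task on the idlest machine."""
    while not all(t.done for t in tasks):
        # step 4: pick the machine with the earliest idle (release) time
        machine = min(machines, key=lambda m: m.release_time)
        # step 5: candidate tasks still waiting to be processed
        ready = [t for t in tasks if not t.done]
        if not ready:  # simplified step 6: no per-machine retry here
            break
        # step 7: choose the task that can start earliest on this machine
        task = min(ready, key=lambda t: max(t.earliest_start, machine.release_time))
        start = max(task.earliest_start, machine.release_time)
        end = start + task.duration
        task.done = True
        machine.release_time = end       # update the machine's release time
        for succ in task.successors:     # push successors' earliest start
            succ.earliest_start = max(succ.earliest_start, end)
        yield task.task_id, machine.machine_id, start, end
```

Step 6's "advance to machine k+1 and retry" is collapsed here into a single ready-list check, which is enough to show how the release times and successor earliest-start times propagate.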
@@ -97,31 +97,33 @@


Then schedule the locked tasks
- Backward only; if backward scheduling is infeasible, return an error. By priority:
- Non-critical operations: place them earlier by lead_time (lead_time when scheduling backward, lag_time when forward) and takt time; kitting time is not considered.
- Critical operations: if a resource is locked, schedule against that resource. Kitting time is not considered, since locked tasks exist to guarantee priority; material shortages are chased down manually via the shortage list
- Shared operations: the shared operation itself takes the earlier time slot among its jobs.
- Other jobs already scheduled must push their preceding operations forward from this new anchor
- The ordering of jobs sharing an operation is adjusted when appropriate, to minimize such modifications
- In the frozen period with the material constraint unsatisfied: schedule by the earliest kitting time, while the already-scheduled operations proceed

Backward only; if backward scheduling is infeasible, return an error. By priority:
- Non-critical operations: place them earlier by lead_time (lead_time when scheduling backward, lag_time when forward) and takt time; kitting time is not considered.
- Critical operations: if a resource is locked, schedule against that resource. Kitting time is not considered, since locked tasks exist to guarantee priority; material shortages are chased down manually via the shortage list
- Shared operations: the shared operation itself takes the earlier time slot among its jobs.
- Other jobs already scheduled must push their preceding operations forward from this new anchor
- The ordering of jobs sharing an operation is adjusted when appropriate, to minimize such modifications
- In the frozen period with the material constraint unsatisfied: schedule by the earliest kitting time, while the already-scheduled operations proceed
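The lead_time placement rule for non-critical operations above can be sketched as a pure backward pass. This is a hypothetical helper over integer time buckets, erroring out when backward placement is infeasible, as the locked-task rule requires:

```python
def backward_by_lead_time(due_time, operations):
    """Place operations back-to-front from the due time, separated by their
    lead times, with no kitting check (the locked-task rule above).
    operations: list of (op_id, processing_time, lead_time) tuples in
    routing order, first operation first."""
    schedule = {}
    end = due_time
    for op_id, proc, lead in reversed(operations):
        start = end - proc
        if start < 0:
            # backward only: infeasible placement is an error, no forward fallback
            raise ValueError(f"backward pass infeasible at {op_id}")
        schedule[op_id] = (start, end)
        # the previous (earlier) operation must finish lead_time before this start
        end = start - lead
    return schedule
```

The `(op_id, processing_time, lead_time)` tuple shape is an assumption for illustration; the library's own operation objects are not shown in this diff.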


Then schedule the remaining tasks by priority
- Backward first
- Non-critical operations before the first critical operation are placed by lead time initially and updated in a second pass.
- Critical operations are scheduled backward; non-critical operations (which task splitting produces) are pushed earlier by lead time
- In backward scheduling all time constraints are latest times, but the material constraint is an earliest start time. If there is not enough time to place an operation, it and all subsequent operations are rescheduled forward
- An alternative idea: schedule backward straight through to the first operation, then forward-schedule the rest together by their available start times; it might even be workable to forward-schedule them together during the tightening pass
- When a shared task is reached and the earlier shared task was scheduled later, the remaining operations are likewise rescheduled forward
- Non-critical operations left over from the first step are pulled later
- If the resource calendar is insufficient at any step, fall through to the next stage
- If backward scheduling fails, schedule forward
- Start from each resource's earliest available date (at this point the earliest-available resource should be chosen); non-critical operations are placed by lead time and updated in a second pass
- Critical operations are scheduled forward
- If a shared task does not satisfy the time constraint
- Non-critical operations left over from the first step are pulled earlier
- If the resource calendar is insufficient at any step, return an error
- Jobs whose critical operations are not configured at all are scheduled backward with infinite capacity

Backward first
- Non-critical operations before the first critical operation are placed by lead time initially and updated in a second pass.
- Critical operations are scheduled backward; non-critical operations (which task splitting produces) are pushed earlier by lead time
- In backward scheduling all time constraints are latest times, but the material constraint is an earliest start time. If there is not enough time to place an operation, it and all subsequent operations are rescheduled forward
- An alternative idea: schedule backward straight through to the first operation, then forward-schedule the rest together by their available start times; it might even be workable to forward-schedule them together during the tightening pass
- When a shared task is reached and the earlier shared task was scheduled later, the remaining operations are likewise rescheduled forward
- Non-critical operations left over from the first step are pulled later
- If the resource calendar is insufficient at any step, fall through to the next stage

If backward scheduling fails, schedule forward
- Start from each resource's earliest available date (at this point the earliest-available resource should be chosen); non-critical operations are placed by lead time and updated in a second pass
- Critical operations are scheduled forward. If a shared task does not satisfy the time constraint
- Non-critical operations left over from the first step are pulled earlier
- If the resource calendar is insufficient at any step, return an error
Jobs whose critical operations are not configured at all are scheduled backward with infinite capacity
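The backward-first, forward-on-failure flow above can be sketched for a single job. This is a deliberately simplified sketch (one operation chain, no resource calendars, hypothetical `(op_id, processing_time)` tuples), not the library's scheduler:

```python
def schedule_job(ops, due_time, horizon_start=0):
    """Try a pure backward pass from the due time; if any operation would
    start before the scheduling horizon, fall back to a forward pass from
    the earliest start, mirroring the fallback described above.
    ops: list of (op_id, processing_time) in routing order."""
    # backward pass: every time constraint is a latest time
    backward, end = {}, due_time
    for op_id, proc in reversed(ops):
        start = end - proc
        backward[op_id] = (start, end)
        end = start
    if end >= horizon_start:  # feasible: the first start stays inside the horizon
        return "backward", backward
    # forward pass: chain operations from the earliest possible start
    forward, start = {}, horizon_start
    for op_id, proc in ops:
        forward[op_id] = (start, start + proc)
        start += proc
    return "forward", forward
```

A real implementation would also carry the material (earliest-kitting) constraint into the backward pass and switch per-operation rather than per-job, but the two-pass shape is the same.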


Task tightening and regularization
@@ -140,7 +142,7 @@


Shop-floor process 2: backward + forward scheduling, variant 2
-------------------
-------------------------
Locked tasks are still scheduled first.

Schedule all tasks backward or forward by due date and earliest start date, without considering the resource constraints themselves [the problem this raises: how to choose resource priority]
@@ -150,9 +152,8 @@
- Every job starts from the earliest start date of its first operation,



Shop-floor process 3: incremental scheduling by resource
---------------------
---------------------------
Input: scheduling tasks (MO + planned orders)
Output: the scheduled resource and result for each operation
1. Filter out the main work orders and component work orders, and build the attribute links for the sub-components
4 changes: 2 additions & 2 deletions docs/source/conf.py
@@ -137,8 +137,8 @@ def setup(app: Sphinx):


# autosummary
autosummary_generate = True
shutil.rmtree(SOURCE_PATH.joinpath("api"), ignore_errors=True)
# autosummary_generate = True
# shutil.rmtree(SOURCE_PATH.joinpath("api"), ignore_errors=True)


# copy changelog
1 change: 0 additions & 1 deletion docs/source/demand.rst
@@ -1,3 +1,2 @@
demand
========================================

5 changes: 2 additions & 3 deletions docs/source/index.rst
@@ -14,7 +14,7 @@ python-lekin documentation


Quick start: shop-floor scheduling
---------------
-----------------------

Scheduling is an allocation problem: finite resources are assigned to demand. Demand therefore needs priorities, and the main constraints are capacity and materials. For the capacity constraint, the finished products in an order are decomposed along the routing into operations, each with its corresponding production machines; for the material constraint, the finished products are exploded via the BOM (bill of materials) into raw-material requirements, and every operation needs its materials kitted before it can start.
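The BOM explosion just described can be sketched as a small recursive roll-down. The BOM contents here are made up for illustration, and the dict-of-lists shape is an assumption rather than python-lekin's data model:

```python
def explode_bom(bom, item, qty, demand=None):
    """Roll a finished-good quantity down a BOM to raw-material demand.
    bom maps an item to a list of (component, per-unit qty) pairs; items
    absent from the map are treated as raw materials (leaves)."""
    demand = {} if demand is None else demand
    if item not in bom:  # leaf: accumulate raw-material demand
        demand[item] = demand.get(item, 0) + qty
        return demand
    for component, per_unit in bom[item]:
        # each unit of the parent consumes per_unit units of the component
        explode_bom(bom, component, qty * per_unit, demand)
    return demand
```

The resulting demand dict is what a kitting check would compare against on-hand inventory before releasing each operation.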

@@ -45,12 +45,11 @@ Finite Capacity Planning
heuristics
application
demand
api
GitHub <https://github.com/LongxingTan/python-lekin>


Indices and tables
==================
=========================

* :ref:`genindex`
* :ref:`modindex`
7 changes: 5 additions & 2 deletions docs/source/rules.rst
@@ -46,11 +46,13 @@ SPT-EDD rule
The constraint forward scheduling imposes on the next operation is an earliest start time.

.. code-block:: python
backward(operations, next_op_start_until, with_material_kitting_constraint, align_with_same_production_line, latest_start_time, latest_end_time) -> remaining_operations: list[operations],

backward(operations, next_op_start_until, with_material_kitting_constraint, align_with_same_production_line, latest_start_time, latest_end_time)


.. code-block:: python
assign_op(operation, is_critical, direction: str, ) -> chosen_resource, chosen_production_id, chosen_hours,

assign_op(operation, is_critical, direction: str)

In forward scheduling, the most tightly packed resources are usually the bottleneck resources.
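The bottleneck remark above can be made concrete with a quick utilization check. This is a sketch over a hypothetical list of `(resource_id, start, end)` assignments, not the library's result object:

```python
def utilization(schedule, horizon):
    """Busy fraction per resource over a planning horizon.
    schedule: list of (resource_id, start, end) assignments."""
    busy = {}
    for res, start, end in schedule:
        busy[res] = busy.get(res, 0) + (end - start)
    return {res: b / horizon for res, b in busy.items()}

def bottleneck(schedule, horizon):
    """The resource packed most tightly by the forward pass is the
    bottleneck candidate."""
    util = utilization(schedule, horizon)
    return max(util, key=util.get)
```

Ranking resources this way after a forward pass is a common heuristic for deciding where finite-capacity effort should be concentrated.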

@@ -68,4 +70,5 @@


.. code-block:: python

forward(operations, next_op_start_until, with_material_kitting_constraint, align_with_same_production_line, earliest_start_time, earliest_end_time)
9 changes: 4 additions & 5 deletions examples/genetic_example.py
@@ -17,7 +17,6 @@
Route,
RouteCollector,
)
from lekin.solver.construction_heuristics import EPSTScheduler, LPSTScheduler
from lekin.solver.meta_heuristics.genetic import GeneticScheduler

logging.basicConfig(format="%(levelname)s:%(message)s", level=logging.DEBUG)
@@ -52,7 +51,7 @@ def __init__(self):
"""==================== main code ==============================="""
"""----- generate initial population -----"""
Tbest = 999999999999999
best_list, best_obj = [], []
population_list = []
makespan_record = []
for i in range(population_size):
@@ -96,10 +94,11 @@ def __init__(self):
"""----------repairment-------------"""
for m in range(population_size):
job_count = {}
# 'larger' record jobs appear in the chromosome more than m times, and 'less' records less than m times
larger, less = (
[],
[],
) # 'larger' record jobs appear in the chromosome more than m times, and 'less' records less than m times.
)
for i in range(num_jobs):
if i in offspring_list[m]:
count = offspring_list[m].count(i)
@@ -140,7 +139,7 @@ def __init__(self):
m_chg[num_mutation_jobs - 1]
] = t_value_last # move the value of the first mutation position to the last mutation position

"""--------fitness value(calculate makespan)-------------"""
"""--------fitness value(calculate make span)-------------"""
total_chromosome = copy.deepcopy(parent_list) + copy.deepcopy(
offspring_list
) # parent and offspring chromosomes combination
@@ -221,13 +220,13 @@ def __init__(self):
)

def run(self):
start_time = time.time()
while True:
np.random.seed(int(time.time()))


def main():
data_reader = DataReader()
print(data_reader)

# job_collector = data_reader.get_job_collector()
# resource_collector = data_reader.get_resource_collector()
18 changes: 5 additions & 13 deletions examples/rule_example.py
@@ -2,17 +2,8 @@
import logging

from lekin.dashboard.gantt import get_scheduling_res_from_all_jobs, plot_gantt_chart
from lekin.lekin_struct import (
Job,
JobCollector,
Operation,
OperationCollector,
Resource,
ResourceCollector,
Route,
RouteCollector,
)
from lekin.solver.construction_heuristics import ForwardScheduler, BackwardScheduler
from lekin.lekin_struct import Job, JobCollector, Operation, Resource, ResourceCollector, Route, RouteCollector
from lekin.solver.construction_heuristics import BackwardScheduler, ForwardScheduler

logging.basicConfig(format="%(levelname)s:%(message)s", level=logging.DEBUG)

@@ -34,10 +25,11 @@ def prepare_data(file_path="./data/k1.json"):
re_name = re["machineName"]
re_id = int(re_name.replace("M", ""))
resource = Resource(resource_id=re_id, resource_name=re_name)
resource.available_hours = list(range(1, 100))
resource_collector.add_resource_dict(resource)

# print([i.resource_id for i in resource_collector.get_all_resources()])
# print(resource_collector.get_all_resources()[0].available_hours)
print([i.resource_id for i in resource_collector.get_all_resources()])
print(resource_collector.get_all_resources()[0].available_hours)

# parse the job and route
for ro in routes:
Empty file removed lekin/forecast/__init__.py
Empty file.
2 changes: 1 addition & 1 deletion lekin/solver/construction_heuristics/__init__.py
@@ -1,8 +1,8 @@
"""Dispatching rules"""

from lekin.solver.construction_heuristics.atcs import ATCScheduler
from lekin.solver.construction_heuristics.forward import ForwardScheduler
from lekin.solver.construction_heuristics.backward import BackwardScheduler
from lekin.solver.construction_heuristics.forward import ForwardScheduler
from lekin.solver.construction_heuristics.spt import SPTScheduler

__all__ = [ATCScheduler, ForwardScheduler, SPTScheduler, BackwardScheduler]
36 changes: 18 additions & 18 deletions lekin/solver/meta_heuristics/genetic.py
@@ -82,7 +82,7 @@ def _init_ms(self):
# ms_sequence: A list of resource IDs representing the machine sequence
return

def _init_os(self):
def _init_os(self, jobs, resources):
# os_sequence: A list of operation IDs representing the operation sequence.
os_sequence = []
ms_sequence = []
@@ -127,20 +127,20 @@ def mutation(self, chromosome):
mutated_chromosome = 0
return mutated_chromosome

def decode(self):
scheduling_result = SchedulingResult()
resource_availability = {
res.resource_id: 0 for res in resources
} # Tracks next available time for each resource

for op_id, res_id in zip(os_sequence, ms_sequence):
op = operations[op_id]
resource_ready_time = resource_availability[res_id]
start_time = max(resource_ready_time, op.earliest_start)
end_time = start_time + op.processing_time

# Update the schedule and resource availability
scheduling_result.schedule[op_id] = (res_id, start_time, end_time)
resource_availability[res_id] = end_time

return scheduling_result
# def decode(self):
# scheduling_result = SchedulingResult()
# resource_availability = {
# res.resource_id: 0 for res in resources
# } # Tracks next available time for each resource
#
# for op_id, res_id in zip(os_sequence, ms_sequence):
# op = operations[op_id]
# resource_ready_time = resource_availability[res_id]
# start_time = max(resource_ready_time, op.earliest_start)
# end_time = start_time + op.processing_time
#
# # Update the schedule and resource availability
# scheduling_result.schedule[op_id] = (res_id, start_time, end_time)
# resource_availability[res_id] = end_time
#
# return scheduling_result
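The commented-out `decode` above can be exercised standalone roughly like this. The sketch uses plain dicts in place of `SchedulingResult` and the operation objects, which are not shown in this diff:

```python
def decode(os_sequence, ms_sequence, operations):
    """Turn an (operation-sequence, machine-sequence) chromosome pair into
    a schedule, following the commented-out decode logic above.
    operations: op_id -> (earliest_start, processing_time)."""
    schedule = {}
    resource_availability = {}  # next available time per resource
    for op_id, res_id in zip(os_sequence, ms_sequence):
        earliest_start, proc = operations[op_id]
        ready = resource_availability.get(res_id, 0)
        # an operation starts when both its resource and itself are ready
        start = max(ready, earliest_start)
        end = start + proc
        schedule[op_id] = (res_id, start, end)
        resource_availability[res_id] = end  # resource busy until this end
    return schedule
```

Because decoding is deterministic given the two sequences, the genetic operators only need to manipulate `os_sequence`/`ms_sequence`; makespan is then `max(end for _, _, end in schedule.values())`.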