Uni-Lab-OS/unilabos/device_comms/modbus_plc/client.py
Xuwznln 9aeffebde1 0.10.7 Update (#101)
* Cleanup registry to be easy-understanding (#76)

* delete deprecated mock devices

* rename categories

* combine chromatographic devices

* rename rviz simulation nodes

* organic virtual devices

* parse vessel_id

* run registry completion before merge

---------

Co-authored-by: Xuwznln <18435084+Xuwznln@users.noreply.github.com>

* fix: workstation handlers and vessel_id parsing

* fix: working-dir error when a config path is supplied
feat: report the publish topic on error

* modify default discovery_interval to 15s

* feat: add trace log level

* feat: add ChinWe device control class with serial communication and motor control support (#79)

* fix: drop_tips not using auto resource select

* fix: discard_tips error

* fix: discard_tips

* fix: prcxi_res

* add: prcxi res
fix: startup slow

* feat: workstation example

* fix pumps and liquid_handler handle

* feat: improve protocol node runtime logging

* fix all protocol_compilers and remove deprecated devices

* feat: add use_remote_resource parameter

* fix and remove redundant info

* bugfixes on organic protocols

* fix filter protocol

* fix protocol node

* temporary compatibility for incorrect driver implementations

* fix: prcxi import error

* use call_async in all service to avoid deadlock

* fix: figure_resource

* Update recipe.yaml

* add workstation template and battery example

* feat: add sk & ak

* update workstation base

* Create workstation_architecture.md

* refactor: workstation_base now contains only business logic; communication and sub-device management move to ProtocolNode

* refactor: ProtocolNode→WorkstationNode

* Add:msgs.action (#83)

* update: Workstation dev, bump version from 0.10.3 to 0.10.4 (#84)

* Add:msgs.action

* update: bump version from 0.10.3 to 0.10.4

* simplify resource system

* uncompleted refactor

* example for use WorkstationBase

* feat: websocket

* feat: websocket test

* feat: workstation example

* feat: action status

* fix: registration error for the station's own methods

* fix: restore the protocol node handler methods

* fix: build

* fix: missing job_id key

* ws test version 1

* ws test version 2

* ws protocol

* add logging for material relationship uploads

* add logging for material relationship uploads

* fix material relationship uploads

* fix broken tracker instance tracking for workstations

* add handle detection and upload of material edge relationships

* fix event loop error

* fix edge reporting error

* fix async error

* update the schema title field

* support auto-refresh of host node information

* registry editor

* fix message errors when status is published at high frequency

* add addr parameter

* fix: addr param

* fix: addr param

* drop labid and the mandatory config input

* Add action definitions for LiquidHandlerSetGroup and LiquidHandlerTransferGroup

- Created LiquidHandlerSetGroup.action with fields for group name, wells, and volumes.
- Created LiquidHandlerTransferGroup.action with fields for source and target group names and unit volume.
- Both actions include response fields for return information and success status.

* Add LiquidHandlerSetGroup and LiquidHandlerTransferGroup actions to CMakeLists

* Add set_group and transfer_group methods to PRCXI9300Handler and update liquid_handler.yaml

* change result_info to a dict type

* add UAT address substitution

* runze multiple pump support

(cherry picked from commit 49354fcf39)

* remove runze multiple software obtainer

(cherry picked from commit 8bcc92a394)

* support multiple backbone

(cherry picked from commit 4771ff2347)

* Update runze pump format

* Correct runze multiple backbone

* Update runze_multiple_backbone

* Correct runze pump multiple receive method.

* Correct runze pump multiple receive method.

* support one-to-many and many-to-many transfer_group for PRCXI9320

* remove MQTT, update the launch docs, provide example registry files, bump to 0.10.5

* fix import error

* fix dupe upload registry

* refactor ws client

* add server timeout

* Fix: run-column with correct vessel id (#86)

* fix run_column

* Update run_column_protocol.py

(cherry picked from commit e5aa4d940a)

* resource_update use resource_add

* add deck-slot recommendation feature

* redefine the input parameters for deck-slot recommendation

* update registry with nested obj

* fix protocol node log_message, added create_resource return value

* fix protocol node log_message, added create_resource return value

* try fix add protocol

* fix resource_add

* fix the liquid handling station's incorrect aspirate registry entry

* Feature/xprbalance-zhida (#80)

* feat(devices): add Zhida GC/MS pretreatment automation workstation

* feat(devices): add mettler_toledo xpr balance

* balance

* re-complete the Zhida registry

* PRCXI9320 json

* PRCXI9320 json

* PRCXI9320 json

* fix resource download

* remove class for resource

* bump version to 0.10.6

* update all registries

* fix protocolnode compatibility

* fix protocolnode compatibility

* Update install md

* Add Defaultlayout

* update the material interface

* fix dict to tree/nested-dict converter

* coin_cell_station draft

* refactor: rename "station_resource" to "deck"

* add standardized BIOYOND resources: bottle_carrier, bottle

* refactor and add BIOYOND resources tests

* add BIOYOND deck assignment and pass all tests

* fix: update resource with correct structure; remove deprecated liquid_handler set_group action

* feat: merge the Neware battery test system driver and config files into workstation_dev_YB2 (#92)

* feat: Neware battery test system driver and registry files

* feat: bring neware driver & battery.json into workstation_dev_YB2

* add bioyond studio draft

* bioyond station with communication init and resource sync

* fix bioyond station and registry

* fix: update resource with correct structure; remove deprecated liquid_handler set_group action

* frontend_docs

* create/update resources with POST/PUT for large and small data payloads

* create/update resources with POST/PUT for large and small data payloads

* refactor: add itemized_carrier instead of carrier consists of ResourceHolder

* create warehouse by factory func

* update bioyond launch json

* add child_size for itemized_carrier

* fix bioyond resource io

* Workstation templates: Resources and its CRUD, and workstation tasks (#95)

---------

Co-authored-by: h840473807 <47357934+h840473807@users.noreply.github.com>
Co-authored-by: Xie Qiming <97236197+Andy6M@users.noreply.github.com>

* update the material interface

* Workstation dev yb2 (#100)

* Refactor and extend reaction station action messages

* Refactor dispensing station tasks to enhance parameter clarity and add batch processing capabilities

- Updated `create_90_10_vial_feeding_task` to include detailed parameters for 90%/10% vial feeding, improving clarity and usability.
- Introduced `create_batch_90_10_vial_feeding_task` for batch processing of 90%/10% vial feeding tasks with JSON formatted input.
- Added `create_batch_diamine_solution_task` for batch preparation of diamine solution, also utilizing JSON formatted input.
- Refined `create_diamine_solution_task` to include additional parameters for better task configuration.
- Enhanced schema descriptions and default values for improved user guidance.

* fix to_plr_resources

* add update remove

* support automatic generation of selector registries
support material transfer

* fix resource addition

* fix transfer_resource_to_another generation

* update transfer_resource_to_another parameters to accept a spot argument

* add test_resource action

* fix host_node error

* fix host_node test_resource error

* fix host_node test_resource error

* filter local actions

* move internal actions for host node compatibility

* fix bug where sync task errors were not displayed

* feat: allow returning materials that do not belong to this node; they can later be distinguished via decoration, so no warning is raised

* update todo

* modify bioyond/plr converter, bioyond resource registry, and tests

* pass the tests

* update todo

* add conda-pack-build.yml

* add auto install script for conda-pack-build.yml

(cherry picked from commit 172599adcf)

* update conda-pack-build.yml

* update conda-pack-build.yml

* update conda-pack-build.yml

* update conda-pack-build.yml

* update conda-pack-build.yml

* Add version in __init__.py
Update conda-pack-build.yml
Add create_zip_archive.py

* Update conda-pack-build.yml

* Update conda-pack-build.yml (with mamba)

* Update conda-pack-build.yml

* Fix FileNotFoundError

* Try fix 'charmap' codec can't encode characters in position 16-23: character maps to <undefined>

* Fix unilabos msgs search error

* Fix environment_check.py

* Update recipe.yaml

* Update registry. Update uuid loop figure method. Update install docs.

* Fix nested conda pack

* Fix one-key installation path error

* Bump version to 0.10.7

* Workshop bj (#99)

* Add LaiYu Liquid device integration and tests

Introduce LaiYu Liquid device implementation, including backend, controllers, drivers, configuration, and resource files. Add hardware connection, tip pickup, and simplified test scripts, as well as experiment and registry configuration for LaiYu Liquid. Documentation and .gitignore for the device are also included.

* feat(LaiYu_Liquid): restructure the device module and add hardware documentation

refactor: reorganize the LaiYu_Liquid module directory structure
docs: add SOPA pipette and stepper motor control command documentation
fix: correct the default maximum volume in the device configuration
test: add workbench configuration test cases
chore: remove obsolete test scripts and configuration files

* add

* refactor: rename LaiYu_Liquid.py to laiyu_liquid_main.py and update all import references

- rename LaiYu_Liquid.py to laiyu_liquid_main.py with git mv
- update import references in all affected files
- keep behavior unchanged; only improve naming consistency
- tests confirm all imports work correctly

* fix: export LaiYuLiquidBackend from core/__init__.py

- add LaiYuLiquidBackend to the import list
- add LaiYuLiquidBackend to the __all__ export list
- ensure all major classes can be imported correctly

* fix folder name casing

* upload the battery assembly workstation secondary-development tutorial (with table of contents) to dev (#94)

* battery assembly workstation secondary-development tutorial

* Update intro.md

* materials tutorial

* update the materials tutorial with JSON-format annotations

* Update prcxi driver & fix transfer_liquid mix_times (#90)

* Update prcxi driver & fix transfer_liquid mix_times

* fix: correct mix_times type

* Update liquid_handler registry

* test: prcxi.py

* Update registry from pr

* fix missing one-key script

* clean files

---------

Co-authored-by: Junhan Chang <changjh@dp.tech>
Co-authored-by: ZiWei <131428629+ZiWei09@users.noreply.github.com>
Co-authored-by: Guangxin Zhang <guangxin.zhang.bio@gmail.com>
Co-authored-by: Xie Qiming <97236197+Andy6M@users.noreply.github.com>
Co-authored-by: h840473807 <47357934+h840473807@users.noreply.github.com>
Co-authored-by: LccLink <1951855008@qq.com>
Co-authored-by: lixinyu1011 <61094742+lixinyu1011@users.noreply.github.com>
Co-authored-by: shiyubo0410 <shiyubo@dp.tech>
2025-10-12 23:34:26 +08:00

538 lines
22 KiB
Python

import json
import time
import traceback
from typing import Any, Union, List, Dict, Callable, Optional, Tuple
from pydantic import BaseModel
from pymodbus.client import ModbusSerialClient, ModbusTcpClient
from pymodbus.framer import FramerType
from typing import TypedDict
from unilabos.device_comms.modbus_plc.modbus import DeviceType, HoldRegister, Coil, InputRegister, DiscreteInputs, DataType, WorderOrder
from unilabos.device_comms.modbus_plc.modbus import Base as ModbusNodeBase
from unilabos.device_comms.universal_driver import UniversalDriver
from unilabos.utils.log import logger
import pandas as pd
class ModbusNode(BaseModel):
name: str
device_type: DeviceType
address: int
data_type: DataType = DataType.INT16
slave: int = 1
class PLCWorkflow(BaseModel):
name: str
actions: List[
Union[
"PLCWorkflow",
Callable[
[Callable[[str], ModbusNodeBase]],
None
]]
]
class Action(BaseModel):
name: str
    rw: bool  # 0 = read, 1 = write
class WorkflowAction(BaseModel):
init: Optional[Callable[[Callable[[str], ModbusNodeBase]], bool]] = None
start: Optional[Callable[[Callable[[str], ModbusNodeBase]], bool]] = None
stop: Optional[Callable[[Callable[[str], ModbusNodeBase]], bool]] = None
cleanup: Optional[Callable[[Callable[[str], ModbusNodeBase]], None]] = None
class ModbusWorkflow(BaseModel):
name: str
actions: List[Union["ModbusWorkflow", WorkflowAction]]
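# A WorkflowAction groups the optional lifecycle callbacks of one step: run_modbus_workflow()
# calls init (if given), then start and stop in order, always attempts cleanup, and aborts the
# surrounding workflow on the first failure.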
""" 前后端Json解析用 """
class AddressFunctionJson(TypedDict):
func_name: str
node_name: str
mode: str
value: Any
class InitFunctionJson(AddressFunctionJson):
pass
class StartFunctionJson(TypedDict):
    func_name: str
    write_functions: list[str]
    condition_functions: list[str]
    stop_condition_expression: str
class StopFunctionJson(AddressFunctionJson):
pass
class CleanupFunctionJson(AddressFunctionJson):
pass
class ActionJson(TypedDict):
address_function_to_create: list[AddressFunctionJson]
create_init_function: Optional[InitFunctionJson]
create_start_function: Optional[StartFunctionJson]
create_stop_function: Optional[StopFunctionJson]
create_cleanup_function: Optional[CleanupFunctionJson]
class WorkflowCreateJson(TypedDict):
name: str
action: list[Union[ActionJson, 'WorkflowCreateJson'] | str]
class ExecuteProcedureJson(TypedDict):
register_node_list_from_csv_path: Optional[dict[str, Any]]
create_flow: list[WorkflowCreateJson]
execute_flow: list[str]
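# A minimal sketch of the JSON accepted by BaseClient.execute_procedure_from_json (all node
# names, function names and the csv path below are illustrative assumptions, not values
# shipped with this module):
# {
#   "register_node_list_from_csv_path": {"path": "M01.csv"},
#   "create_flow": [
#     {
#       "name": "home",
#       "action": [
#         {
#           "address_function_to_create": [
#             {"func_name": "pos_tip", "node_name": "M01_idlepos_coil_w", "mode": "write", "value": true},
#             {"func_name": "pos_tip_read", "node_name": "M01_idlepos_coil_r", "mode": "read", "value": 1}
#           ],
#           "create_init_function": null,
#           "create_start_function": {
#             "func_name": "home_start",
#             "write_functions": ["pos_tip"],
#             "condition_functions": ["pos_tip_read"],
#             "stop_condition_expression": "pos_tip_read[0]"
#           },
#           "create_stop_function": {"func_name": "home_stop", "node_name": "M01_idlepos_coil_w", "mode": "write", "value": false},
#           "create_cleanup_function": null
#         }
#       ]
#     }
#   ],
#   "execute_flow": ["home"]
# }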
class BaseClient(UniversalDriver):
client: Optional[Union[ModbusSerialClient, ModbusTcpClient]] = None
_node_registry: Dict[str, ModbusNodeBase] = {}
DEFAULT_ADDRESS_PATH = ""
def __init__(self):
super().__init__()
# self.register_node_list_from_csv_path()
def _set_client(self, client: Optional[Union[ModbusSerialClient, ModbusTcpClient]]) -> None:
if client is None:
raise ValueError('client is not valid')
# if not isinstance(client, TCPClient ) or not isinstance(client, RTUClient):
# raise ValueError('client is not valid')
self.client = client
def _connect(self) -> None:
logger.info('try to connect client...')
if self.client:
if self.client.connect():
logger.info('client connected!')
else:
logger.error('client connect failed')
else:
raise ValueError('client is not initialized')
@classmethod
def load_csv(cls, file_path: str):
df = pd.read_csv(file_path)
        df = df.drop_duplicates(subset='Name', keep='first')  # FIXME: duplicate names should raise an error instead of being silently dropped
data_dict = df.set_index('Name').to_dict(orient='index')
nodes = []
for k, v in data_dict.items():
deviceType = v.get('DeviceType', None)
addr = v.get('Address', 0)
dataType = v.get('DataType', 'BOOL')
if not deviceType or not addr:
continue
if deviceType == DeviceType.COIL.value:
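                # coil rows encode the address as <word><bit> in decimal (the last digit is taken
                # as the bit number, presumably 0-7); flatten it into a plain bit offset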
byteAddr = int(addr / 10)
bitAddr = addr % 10
addr = byteAddr * 8 + bitAddr
if dataType == 'BOOL':
# noinspection PyTypeChecker
dataType = 'INT16'
# noinspection PyTypeChecker
if pd.isna(dataType):
                print(v, "failed to register!")
continue
dataType: DataType = DataType[dataType]
nodes.append(ModbusNode(name=k, device_type=DeviceType(deviceType), address=addr, data_type=dataType))
return nodes
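    # The CSV consumed by load_csv needs the columns "Name", "DeviceType", "Address" and
    # "DataType" (one row per node).  A minimal sketch, assuming DeviceType holds values
    # matching the DeviceType enum and DataType holds DataType member names such as INT16:
    #
    #   Name,DeviceType,Address,DataType
    #   M01_idlepos_velocity_rw,<hold-register value>,100,INT16
    #   M01_idlepos_coil_w,<coil value>,15,BOOL
    #
    # Rows without a DeviceType or Address are skipped, and BOOL is stored as INT16.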
def use_node(self, name: str) -> ModbusNodeBase:
if name not in self._node_registry:
raise ValueError(f'node {name} is not registered')
return self._node_registry[name]
def get_node_registry(self) -> Dict[str, ModbusNodeBase]:
return self._node_registry
def register_node_list_from_csv_path(self, path: str = None) -> "BaseClient":
if path is None:
path = self.DEFAULT_ADDRESS_PATH
nodes = self.load_csv(path)
return self.register_node_list(nodes)
def register_node_list(self, node_list: List[ModbusNode]) -> "BaseClient":
if not self.client:
raise ValueError('client is not connected')
if not node_list or len(node_list) == 0:
logger.warning('node list is empty')
return self
logger.info(f'start to register {len(node_list)} nodes...')
for node in node_list:
if node is None:
continue
if node.name in self._node_registry:
logger.info(f'node {node.name} already exists')
exist = self._node_registry[node.name]
if exist.type != node.device_type:
                    raise ValueError(f'node {node.name} type {node.device_type} conflicts with registered type {exist.type}')
                if exist.address != node.address:
                    raise ValueError(f'node {node.name} address {node.address} conflicts with registered address {exist.address}')
continue
if not isinstance(node.device_type, DeviceType):
raise ValueError(f'node {node.name} type is not valid')
if node.device_type == DeviceType.HOLD_REGISTER:
self._node_registry[node.name] = HoldRegister(self.client, node.name, node.address, node.data_type)
elif node.device_type == DeviceType.COIL:
self._node_registry[node.name] = Coil(self.client, node.name, node.address, node.data_type)
elif node.device_type == DeviceType.INPUT_REGISTER:
self._node_registry[node.name] = InputRegister(self.client, node.name, node.address, node.data_type)
elif node.device_type == DeviceType.DISCRETE_INPUTS:
self._node_registry[node.name] = DiscreteInputs(self.client, node.name, node.address, node.data_type)
else:
raise ValueError(f'node {node.name} type {node.device_type} is not valid')
logger.info('register nodes done.')
return self
def run_plc_workflow(self, workflow: PLCWorkflow) -> None:
if not self.client:
raise ValueError('client is not connected')
logger.info(f'start to run workflow {workflow.name}...')
for action in workflow.actions:
if isinstance(action, PLCWorkflow):
self.run_plc_workflow(action)
elif isinstance(action, Callable):
action(self.use_node)
else:
raise ValueError(f'invalid action {action}')
def call_lifecycle_fn(
self,
workflow: ModbusWorkflow,
fn: Optional[Callable[[Callable], bool]],
) -> bool:
if not fn:
raise ValueError('fn is not valid in call_lifecycle_fn')
try:
return fn(self.use_node)
except Exception as e:
traceback.print_exc()
logger.error(f'execute {workflow.name} lifecycle failed, err: {e}')
return False
def run_modbus_workflow(self, workflow: ModbusWorkflow) -> bool:
if not self.client:
raise ValueError('client is not connected')
logger.info(f'start to run workflow {workflow.name}...')
for action in workflow.actions:
if isinstance(action, ModbusWorkflow):
if self.run_modbus_workflow(action):
logger.info(f"{action.name} workflow done.")
continue
else:
logger.error(f"{action.name} workflow failed")
return False
elif isinstance(action, WorkflowAction):
init = action.init
start = action.start
stop = action.stop
cleanup = action.cleanup
if not init and not start and not stop:
raise ValueError(f'invalid action {action}')
is_err = False
try:
if init and not self.call_lifecycle_fn(workflow, init):
raise ValueError(f"{workflow.name} init action failed")
if not self.call_lifecycle_fn(workflow, start):
raise ValueError(f"{workflow.name} start action failed")
if not self.call_lifecycle_fn(workflow, stop):
raise ValueError(f"{workflow.name} stop action failed")
logger.info(f"{workflow.name} action done.")
except Exception as e:
is_err = True
traceback.print_exc()
logger.error(f"{workflow.name} action failed, err: {e}")
finally:
logger.info(f"{workflow.name} try to run cleanup")
if cleanup:
self.call_lifecycle_fn(workflow, cleanup)
else:
logger.info(f"{workflow.name} cleanup is not defined")
if is_err:
return False
return True
else:
raise ValueError(f'invalid action type {type(action)}')
return True
function_name: dict[str, Callable[[Callable[[str], ModbusNodeBase]], bool]] = {}
@classmethod
def pack_func(cls, func, value="UNDEFINED"):
def execute_pack_func(use_node: Callable[[str], ModbusNodeBase]):
if value == "UNDEFINED":
func()
else:
func(use_node, value)
return execute_pack_func
def create_address_function(self, func_name: str = None, node_name: str = None, mode: str = None, value: Any = None, data_type: Optional[DataType] = None, word_order: WorderOrder = None, slave: Optional[int] = None) -> Callable[[Callable[[str], ModbusNodeBase]], bool]:
def execute_address_function(use_node: Callable[[str], ModbusNodeBase]) -> Union[bool, Tuple[Union[int, float, str, list[bool], list[int], list[float]], bool]]:
param = {"value": value}
if data_type is not None:
param["data_type"] = data_type
if word_order is not None:
param["word_order"] = word_order
if slave is not None:
param["slave"] = slave
target_node = use_node(node_name)
print("执行", node_name, type(target_node).__name__, target_node.address, mode, value)
if mode == 'read':
return use_node(node_name).read(**param)
elif mode == 'write':
return not use_node(node_name).write(**param)
return False
if func_name is None:
func_name = node_name + '_' + mode + '_' + str(value)
print("创建 address function", mode, func_name)
self.function_name[func_name] = execute_address_function
return execute_address_function
def create_init_function(self, func_name: str = None, node_name: str = None, mode: str = None, value: Any = None, data_type: Optional[DataType] = None, word_order: WorderOrder = None, slave: Optional[int] = None):
return self.create_address_function(func_name, node_name, mode, value, data_type, word_order, slave)
def create_stop_function(self, func_name: str = None, node_name: str = None, mode: str = None, value: Any = None, data_type: Optional[DataType] = None, word_order: WorderOrder = None, slave: Optional[int] = None):
return self.create_address_function(func_name, node_name, mode, value, data_type, word_order, slave)
def create_cleanup_function(self, func_name: str = None, node_name: str = None, mode: str = None, value: Any = None, data_type: Optional[DataType] = None, word_order: WorderOrder = None, slave: Optional[int] = None):
return self.create_address_function(func_name, node_name, mode, value, data_type, word_order, slave)
def create_start_function(self, func_name: str, write_functions: list[str], condition_functions: list[str], stop_condition_expression: str):
def execute_start_function(use_node: Callable[[str], ModbusNodeBase]) -> bool:
for write_function in write_functions:
self.function_name[write_function](use_node)
while True:
next_loop = False
condition_source = {}
for condition_function in condition_functions:
read_res, read_err = self.function_name[condition_function](use_node)
if read_err:
next_loop = True
break
condition_source[condition_function] = read_res
if not next_loop:
if stop_condition_expression:
condition_source["__RESULT"] = None
exec(f"__RESULT = {stop_condition_expression}", {}, condition_source) # todo: safety check
res = condition_source["__RESULT"]
print("取得计算结果;", res)
if res:
break
else:
time.sleep(0.3)
return True
return execute_start_function
def create_action_from_json(self, data: ActionJson):
for i in data["address_function_to_create"]:
self.create_address_function(**i)
init = None
start = None
stop = None
cleanup = None
if data["create_init_function"]:
print("创建 init function")
init = self.create_init_function(**data["create_init_function"])
if data["create_start_function"]:
print("创建 start function")
start = self.create_start_function(**data["create_start_function"])
if data["create_stop_function"]:
print("创建 stop function")
stop = self.create_stop_function(**data["create_stop_function"])
if data["create_cleanup_function"]:
print("创建 cleanup function")
cleanup = self.create_cleanup_function(**data["create_cleanup_function"])
return WorkflowAction(init=init, start=start, stop=stop, cleanup=cleanup)
workflow_name = {}
def create_workflow_from_json(self, data: list[WorkflowCreateJson]):
for ind, flow in enumerate(data):
print("正在创建 workflow", ind, flow["name"])
actions = []
for i in flow["action"]:
if isinstance(i, str):
print("沿用 已有workflow 作为action", i)
action = self.workflow_name[i]
else:
print("创建 action")
action = self.create_action_from_json(i)
actions.append(action)
flow_instance = ModbusWorkflow(name=flow["name"], actions=actions)
print("创建完成 workflow", flow["name"])
self.workflow_name[flow["name"]] = flow_instance
def execute_workflow_from_json(self, data: list[str]):
for i in data:
print("正在执行 workflow", i)
self.run_modbus_workflow(self.workflow_name[i])
def execute_procedure_from_json(self, data: ExecuteProcedureJson):
if data["register_node_list_from_csv_path"]:
print("注册节点 csv", data["register_node_list_from_csv_path"])
self.register_node_list_from_csv_path(**data["register_node_list_from_csv_path"])
print("创建工作流")
self.create_workflow_from_json(data["create_flow"])
print("执行工作流")
self.execute_workflow_from_json(data["execute_flow"])
class TCPClient(BaseClient):
def __init__(self, addr: str, port: int):
super().__init__()
self._set_client(ModbusTcpClient(host=addr, port=port))
# self._connect()
class RTUClient(BaseClient):
def __init__(self, port: str, baudrate: int, timeout: int):
super().__init__()
self._set_client(ModbusSerialClient(framer=FramerType.RTU, port=port, baudrate=baudrate, timeout=timeout))
self._connect()
if __name__ == '__main__':
""" 代码写法① """
def idel_init(use_node: Callable[[str], ModbusNodeBase]) -> bool:
# 修改速度
use_node('M01_idlepos_velocity_rw').write(20.0)
# 修改位置
# use_node('M01_idlepos_position_rw').write(35.22)
return True
def idel_position(use_node: Callable[[str], ModbusNodeBase]) -> bool:
use_node('M01_idlepos_coil_w').write(True)
while True:
pos_idel, idel_err = use_node('M01_idlepos_coil_r').read(1)
pos_stop, stop_err = use_node('M01_manual_stop_coil_r').read(1)
time.sleep(0.5)
if not idel_err and not stop_err and pos_idel[0] and pos_stop[0]:
break
return True
def idel_stop(use_node: Callable[[str], ModbusNodeBase]) -> bool:
use_node('M01_idlepos_coil_w').write(False)
return True
    move_idel = ModbusWorkflow(name="test idle position", actions=[WorkflowAction(
init=idel_init,
start=idel_position,
stop=idel_stop,
)])
    def pipetter_init(use_node: Callable[[str], ModbusNodeBase]) -> bool:
        # set the velocity
        # use_node('M01_idlepos_velocity_rw').write(10.0)
        # set the position
        # use_node('M01_idlepos_position_rw').write(35.22)
        return True
def pipetter_position(use_node: Callable[[str], ModbusNodeBase]) -> bool:
use_node('M01_pipette0_coil_w').write(True)
while True:
pos_idel, isError = use_node('M01_pipette0_coil_r').read(1)
pos_stop, isError = use_node('M01_manual_stop_coil_r').read(1)
time.sleep(0.5)
if pos_idel[0] and pos_stop[0]:
break
return True
def pipetter_stop(use_node: Callable[[str], ModbusNodeBase]) -> bool:
use_node('M01_pipette0_coil_w').write(False)
return True
    move_pipetter = ModbusWorkflow(name="test pipette position", actions=[WorkflowAction(
init=None,
start=pipetter_position,
stop=pipetter_stop,
)])
    workflow_test_2 = ModbusWorkflow(name="test horizontal move and stop", actions=[
move_idel,
move_pipetter,
])
# .run_modbus_workflow(move_2_left_workflow)
""" 代码写法② """
# if False:
# modbus_tcp_client_test2 = TCPClient('192.168.3.2', 502)
# modbus_tcp_client_test2.register_node_list_from_csv_path('M01.csv')
# init = modbus_tcp_client_test2.create_init_function('idel_init', 'M01_idlepos_velocity_rw', 'write', 20.0)
#
# modbus_tcp_client_test2.create_address_function('pos_tip', 'M01_idlepos_coil_w', 'write', True)
# modbus_tcp_client_test2.create_address_function('pos_tip_read', 'M01_idlepos_coil_r', 'read', 1)
# modbus_tcp_client_test2.create_address_function('manual_stop', 'M01_manual_stop_coil_r', 'read', 1)
# start = modbus_tcp_client_test2.create_start_function(
# 'idel_position',
# write_functions=[
# 'pos_tip'
# ],
# condition_functions=[
# 'pos_tip_read',
# 'manual_stop'
# ],
# stop_condition_expression='pos_tip_read[0] and manual_stop[0]'
# )
# stop = modbus_tcp_client_test2.create_stop_function('idel_stop', 'M01_idlepos_coil_w', 'write', False)
#
    # move_idel = ModbusWorkflow(name="return to home position", actions=[WorkflowAction(
# init=init,
# start=start,
# stop=stop,
# )])
#
# modbus_tcp_client_test2.create_address_function('pipetter_position', 'M01_pipette0_coil_w', 'write', True)
# modbus_tcp_client_test2.create_address_function('pipetter_position_read', 'M01_pipette0_coil_r', 'read', 1)
# modbus_tcp_client_test2.create_address_function('pipetter_stop_read', 'M01_manual_stop_coil_r', 'read', 1)
# pipetter_position = modbus_tcp_client_test2.create_start_function(
# 'pipetter_start',
# write_functions=[
# 'pipetter_position'
# ],
# condition_functions=[
# 'pipetter_position_read',
# 'pipetter_stop_read'
# ],
# stop_condition_expression='pipetter_position[0] and pipetter_stop_read[0]'
# )
# pipetter_stop = modbus_tcp_client_test2.create_stop_function('pipetter_stop', 'M01_pipette0_coil_w', 'write', False)
#
    # move_pipetter = ModbusWorkflow(name="test pipette position", actions=[WorkflowAction(
# init=None,
# start=pipetter_position,
# stop=pipetter_stop,
# )])
#
    # workflow_test_2 = ModbusWorkflow(name="test horizontal move and stop", actions=[
# move_idel,
# move_pipetter,
# ])
#
    # modbus_tcp_client_test2.run_modbus_workflow(workflow_test_2)
""" 代码写法③ """
with open('example_json.json', 'r', encoding='utf-8') as f:
example_json = json.load(f)
modbus_tcp_client_test2 = TCPClient('127.0.0.1', 5021)
modbus_tcp_client_test2.execute_procedure_from_json(example_json)
# .run_modbus_workflow(move_2_left_workflow)
# init_client(FramerType.SOCKET, "", '192.168.3.2', 502)