Mirror of https://github.com/dptech-corp/Uni-Lab-OS.git, synced 2026-02-05 05:45:10 +00:00.
Compare commits: 42 commits (workflow_u ... acf5fdebf8)
Commits in this compare (SHA1 only):

acf5fdebf8, 7f7b1c13c0, 75f09034ff, 549a50220b, 4189a2cfbe, 48895a9bb1, 891f126ed6, 4d3475a849, b475db66df, a625a86e3e, 37e0f1037c, a242253145, 448e0074b7, 304827fc8d, 872b3d781f, 813400f2b4, b6dfe2b944, 8807865649, 5fc7eb7586, 9bd72b48e1, 42b78ab4c1, 9645609a05, a2a827d7ac, bb3ca645a4, 37ee43d19a, bc30f23e34, 166d84afe1, 1b43c53015, d4415f5a35, 0260cbbedb, 7c440d10ab, c85c49817d, c70eafa5f0, b64466d443, ef3f24ed48, 2a8e8d014b, e0da1c7217, 51d3e61723, 6b5765bbf3, eb1f3fbe1c, fb93b1cd94, 9aeffebde1
@@ -1,6 +1,6 @@
package:
  name: unilabos
  version: 0.10.12
  version: 0.10.11

source:
  path: ../unilabos

@@ -67,6 +67,14 @@ class WSConfig:
|
||||
max_reconnect_attempts = 999 # 最大重连次数
|
||||
ping_interval = 30 # ping间隔(秒)
|
||||
|
||||
# OSS上传配置
|
||||
class OSSUploadConfig:
|
||||
api_host = "" # API主机地址
|
||||
authorization = "" # 授权信息
|
||||
init_endpoint = "" # 初始化端点
|
||||
complete_endpoint = "" # 完成端点
|
||||
max_retries = 3 # 最大重试次数
|
||||
|
||||
# HTTP配置
|
||||
class HTTPConfig:
|
||||
remote_addr = "https://uni-lab.bohrium.com/api/v1" # 远程服务器地址
|
||||
@@ -286,7 +294,19 @@ HTTP 客户端配置用于与云端服务通信:
|
||||
- UAT 环境:`https://uni-lab.uat.bohrium.com/api/v1`
|
||||
- 本地环境:`http://127.0.0.1:48197/api/v1`
|
||||
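
A minimal sketch of switching `HTTPConfig.remote_addr` between the environments listed above at startup. The `UNILAB_ENV` variable name and the helper are assumptions for illustration, not part of Uni-Lab-OS:

```python
import os

# The endpoint URLs are the ones documented above; UNILAB_ENV is an assumed variable name.
_REMOTE_ADDRS = {
    "prod": "https://uni-lab.bohrium.com/api/v1",
    "uat": "https://uni-lab.uat.bohrium.com/api/v1",
    "local": "http://127.0.0.1:48197/api/v1",
}

def select_remote_addr(default: str = "prod") -> str:
    """Pick the remote server address for the current environment."""
    env = os.environ.get("UNILAB_ENV", default).lower()
    return _REMOTE_ADDRS.get(env, _REMOTE_ADDRS[default])
```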
|
||||
### 4. ROSConfig - ROS 配置
|
||||
### 4. OSSUploadConfig - OSS 上传配置
|
||||
|
||||
对象存储服务配置,用于文件上传功能:
|
||||
|
||||
| 参数 | 类型 | 默认值 | 说明 |
|
||||
| ------------------- | ---- | ------ | -------------------- |
|
||||
| `api_host` | str | `""` | OSS API 主机地址 |
|
||||
| `authorization` | str | `""` | 授权认证信息 |
|
||||
| `init_endpoint` | str | `""` | 上传初始化端点 |
|
||||
| `complete_endpoint` | str | `""` | 上传完成端点 |
|
||||
| `max_retries` | int | `3` | 上传失败最大重试次数 |
|
||||
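
To make the two-endpoint design concrete, here is a hedged sketch of an init/upload/complete round-trip retried up to `max_retries` times. Only the field names come from the table; the request payloads, the `upload_url`/`upload_id` response fields, and the use of `requests` are assumptions, not the actual Uni-Lab-OS client:

```python
import requests

def upload_with_retries(cfg, file_path: str) -> bool:
    """Illustrative only: initialize an upload, send the file, then mark it complete."""
    headers = {"Authorization": cfg.authorization}
    for _ in range(cfg.max_retries):
        try:
            init = requests.post(cfg.api_host + cfg.init_endpoint, headers=headers,
                                 json={"filename": file_path}, timeout=10).json()
            with open(file_path, "rb") as f:
                requests.put(init["upload_url"], data=f, timeout=60).raise_for_status()  # assumed field
            requests.post(cfg.api_host + cfg.complete_endpoint, headers=headers,
                          json={"upload_id": init.get("upload_id")}, timeout=10).raise_for_status()
            return True
        except (requests.RequestException, KeyError):
            continue  # retry on network errors or unexpected response shape
    return False
```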
|
||||
### 5. ROSConfig - ROS 配置
|
||||
|
||||
配置 ROS 消息转换器需要加载的模块:
|
||||
|
||||
|
||||
@@ -4,8 +4,7 @@
|
||||
|
||||
## 概述
|
||||
|
||||
注册表(Registry)是 Uni-Lab 的设备配置系统,采用 YAML 格式定义设备的:
|
||||
|
||||
注册表(Registry)是Uni-Lab的设备配置系统,采用YAML格式定义设备的:
|
||||
- 可用动作(Actions)
|
||||
- 状态类型(Status Types)
|
||||
- 初始化参数(Init Parameters)
|
||||
@@ -33,19 +32,19 @@
|
||||
|
||||
### 核心字段说明
|
||||
|
||||
| 字段名 | 类型 | 需要手写 | 说明 |
|
||||
| ----------------- | ------ | -------- | --------------------------------- |
|
||||
| 设备标识符 | string | 是 | 设备的唯一名字,如 `mock_chiller` |
|
||||
| class | object | 部分 | 设备的核心信息,必须配置 |
|
||||
| description | string | 否 | 设备描述,系统默认给空字符串 |
|
||||
| handles | array | 否 | 连接关系,默认为空 |
|
||||
| icon | string | 否 | 图标路径,默认为空 |
|
||||
| init_param_schema | object | 否 | 初始化参数,系统自动分析生成 |
|
||||
| version | string | 否 | 版本号,默认 "1.0.0" |
|
||||
| category | array | 否 | 设备分类,默认使用文件名 |
|
||||
| config_info | array | 否 | 嵌套配置,默认为空 |
|
||||
| file_path | string | 否 | 文件路径,系统自动设置 |
|
||||
| registry_type | string | 否 | 注册表类型,自动设为 "device" |
|
||||
| 字段名 | 类型 | 需要手写 | 说明 |
|
||||
| ----------------- | ------ | -------- | ----------------------------------- |
|
||||
| 设备标识符 | string | 是 | 设备的唯一名字,如 `mock_chiller` |
|
||||
| class | object | 部分 | 设备的核心信息,必须配置 |
|
||||
| description | string | 否 | 设备描述,系统默认给空字符串 |
|
||||
| handles | array | 否 | 连接关系,默认为空 |
|
||||
| icon | string | 否 | 图标路径,默认为空 |
|
||||
| init_param_schema | object | 否 | 初始化参数,系统自动分析生成 |
|
||||
| version | string | 否 | 版本号,默认 "1.0.0" |
|
||||
| category | array | 否 | 设备分类,默认使用文件名 |
|
||||
| config_info | array | 否 | 嵌套配置,默认为空 |
|
||||
| file_path | string | 否 | 文件路径,系统自动设置 |
|
||||
| registry_type | string | 否 | 注册表类型,自动设为 "device" |
|
||||
|
||||
### class 字段详解
|
||||
|
||||
@@ -72,11 +71,11 @@ my_device:
|
||||
# 动作配置(详见后文)
|
||||
action_name:
|
||||
type: UniLabJsonCommand
|
||||
goal: { ... }
|
||||
result: { ... }
|
||||
goal: {...}
|
||||
result: {...}
|
||||
|
||||
description: '设备描述'
|
||||
version: '1.0.0'
|
||||
description: "设备描述"
|
||||
version: "1.0.0"
|
||||
category:
|
||||
- device_category
|
||||
handles: []
|
||||
@@ -102,22 +101,21 @@ my_device:
|
||||
|
||||
## 创建注册表的方式
|
||||
|
||||
### 方式 1: 使用注册表编辑器(推荐)
|
||||
### 方式1: 使用注册表编辑器(推荐)
|
||||
|
||||
适合大多数场景,快速高效。
|
||||
|
||||
**步骤**:
|
||||
|
||||
1. 启动 Uni-Lab
|
||||
2. 访问 Web 界面的"注册表编辑器"
|
||||
3. 上传您的 Python 设备驱动文件
|
||||
1. 启动Uni-Lab
|
||||
2. 访问Web界面的"注册表编辑器"
|
||||
3. 上传您的Python设备驱动文件
|
||||
4. 点击"分析文件"
|
||||
5. 填写描述和图标
|
||||
6. 点击"生成注册表"
|
||||
7. 复制生成的 YAML 内容
|
||||
7. 复制生成的YAML内容
|
||||
8. 保存到 `unilabos/registry/devices/your_device.yaml`
|
||||
|
||||
### 方式 2: 使用--complete_registry 参数(开发调试)
|
||||
### 方式2: 使用--complete_registry参数(开发调试)
|
||||
|
||||
适合开发阶段,自动补全配置。
|
||||
|
||||
@@ -127,8 +125,7 @@ unilab -g dev.json --complete_registry --registry_path ./my_registry
|
||||
```
|
||||
|
||||
系统会:
|
||||
|
||||
1. 扫描 Python 类
|
||||
1. 扫描Python类
|
||||
2. 分析方法签名和类型
|
||||
3. 自动生成缺失的字段
|
||||
4. 保存到注册表文件
|
||||
@@ -140,7 +137,7 @@ unilab -g dev.json --complete_registry --registry_path ./my_registry
|
||||
启动系统时用 complete_registry=True 参数,让系统自动补全
|
||||
```
|
||||
|
||||
### 方式 3: 手动编写(高级)
|
||||
### 方式3: 手动编写(高级)
|
||||
|
||||
适合需要精细控制或特殊需求的场景。
|
||||
|
||||
@@ -189,7 +186,6 @@ my_device:
|
||||
| ROS 动作类型 | 标准 ROS 动作 | goal_default 和 schema |
|
||||
|
||||
**常用的 ROS 动作类型**:
|
||||
|
||||
- `SendCmd`:发送简单命令
|
||||
- `NavigateThroughPoses`:导航动作
|
||||
- `SingleJointPosition`:单关节位置控制
|
||||
@@ -255,11 +251,11 @@ heat_chill_start:
|
||||
|
||||
## 特殊类型的自动识别
|
||||
|
||||
### ResourceSlot 和 DeviceSlot 识别
|
||||
### ResourceSlot和DeviceSlot识别
|
||||
|
||||
当您在驱动代码中使用这些特殊类型时,系统会自动识别并生成相应的前端选择器。
|
||||
|
||||
**Python 驱动代码示例**:
|
||||
**Python驱动代码示例**:
|
||||
|
||||
```python
|
||||
from unilabos.registry.placeholder_type import ResourceSlot, DeviceSlot
|
||||
@@ -290,24 +286,24 @@ my_device:
|
||||
device: device
|
||||
devices: devices
|
||||
placeholder_keys:
|
||||
resource: unilabos_resources # 自动添加!
|
||||
resources: unilabos_resources # 自动添加!
|
||||
device: unilabos_devices # 自动添加!
|
||||
devices: unilabos_devices # 自动添加!
|
||||
resource: unilabos_resources # 自动添加!
|
||||
resources: unilabos_resources # 自动添加!
|
||||
device: unilabos_devices # 自动添加!
|
||||
devices: unilabos_devices # 自动添加!
|
||||
result:
|
||||
success: success
|
||||
```
|
||||
|
||||
### 识别规则
|
||||
|
||||
| Python 类型 | placeholder_keys 值 | 前端效果 |
|
||||
| -------------------- | -------------------- | -------------- |
|
||||
| `ResourceSlot` | `unilabos_resources` | 单选资源下拉框 |
|
||||
| Python类型 | placeholder_keys值 | 前端效果 |
|
||||
|-----------|-------------------|---------|
|
||||
| `ResourceSlot` | `unilabos_resources` | 单选资源下拉框 |
|
||||
| `List[ResourceSlot]` | `unilabos_resources` | 多选资源下拉框 |
|
||||
| `DeviceSlot` | `unilabos_devices` | 单选设备下拉框 |
|
||||
| `List[DeviceSlot]` | `unilabos_devices` | 多选设备下拉框 |
|
||||
| `DeviceSlot` | `unilabos_devices` | 单选设备下拉框 |
|
||||
| `List[DeviceSlot]` | `unilabos_devices` | 多选设备下拉框 |
|
||||
|
||||
### 前端 UI 效果
|
||||
### 前端UI效果
|
||||
|
||||
#### 单选资源
|
||||
|
||||
@@ -317,7 +313,6 @@ placeholder_keys:
|
||||
```
|
||||
|
||||
**前端渲染**:
|
||||
|
||||
```
|
||||
Source: [下拉选择框 ▼]
|
||||
├── plate_1 (96孔板)
|
||||
@@ -334,7 +329,6 @@ placeholder_keys:
|
||||
```
|
||||
|
||||
**前端渲染**:
|
||||
|
||||
```
|
||||
Targets: [多选下拉框 ▼]
|
||||
☑ plate_1 (96孔板)
|
||||
@@ -351,7 +345,6 @@ placeholder_keys:
|
||||
```
|
||||
|
||||
**前端渲染**:
|
||||
|
||||
```
|
||||
Pump: [下拉选择框 ▼]
|
||||
├── pump_1 (注射泵A)
|
||||
@@ -367,7 +360,6 @@ placeholder_keys:
|
||||
```
|
||||
|
||||
**前端渲染**:
|
||||
|
||||
```
|
||||
Sync Devices: [多选下拉框 ▼]
|
||||
☑ heater_1 (加热器A)
|
||||
@@ -375,11 +367,11 @@ Sync Devices: [多选下拉框 ▼]
|
||||
☐ pump_1 (注射泵)
|
||||
```
|
||||
|
||||
### 手动配置 placeholder_keys
|
||||
### 手动配置placeholder_keys
|
||||
|
||||
如果需要手动添加或覆盖自动生成的 placeholder_keys:
|
||||
如果需要手动添加或覆盖自动生成的placeholder_keys:
|
||||
|
||||
#### 场景 1: 非标准参数名
|
||||
#### 场景1: 非标准参数名
|
||||
|
||||
```yaml
|
||||
action_value_mappings:
|
||||
@@ -392,7 +384,7 @@ action_value_mappings:
|
||||
my_device_param: unilabos_devices
|
||||
```
|
||||
|
||||
#### 场景 2: 混合类型
|
||||
#### 场景2: 混合类型
|
||||
|
||||
```python
|
||||
def mixed_params(
|
||||
@@ -406,33 +398,32 @@ def mixed_params(
|
||||
|
||||
```yaml
|
||||
placeholder_keys:
|
||||
resource: unilabos_resources # 资源选择
|
||||
device: unilabos_devices # 设备选择
|
||||
resource: unilabos_resources # 资源选择
|
||||
device: unilabos_devices # 设备选择
|
||||
# normal_param不需要placeholder_keys
|
||||
```
|
||||
|
||||
#### 场景 3: 自定义选择器
|
||||
#### 场景3: 自定义选择器
|
||||
|
||||
```yaml
|
||||
placeholder_keys:
|
||||
special_param: custom_selector # 使用自定义选择器
|
||||
special_param: custom_selector # 使用自定义选择器
|
||||
```
|
||||
|
||||
## 系统自动生成的字段
|
||||
|
||||
### status_types
|
||||
|
||||
系统会扫描你的 Python 类,从状态方法(property 或 get\_方法)自动生成这部分:
|
||||
系统会扫描你的 Python 类,从状态方法(property或get_方法)自动生成这部分:
|
||||
|
||||
```yaml
|
||||
status_types:
|
||||
current_temperature: float # 从 get_current_temperature() 或 @property current_temperature
|
||||
is_heating: bool # 从 get_is_heating() 或 @property is_heating
|
||||
status: str # 从 get_status() 或 @property status
|
||||
is_heating: bool # 从 get_is_heating() 或 @property is_heating
|
||||
status: str # 从 get_status() 或 @property status
|
||||
```
|
||||
|
||||
**注意事项**:
|
||||
|
||||
- 系统会查找所有 `get_` 开头的方法和 `@property` 装饰的属性
|
||||
- 类型会自动转成相应的类型(如 `str`、`float`、`bool`)
|
||||
- 如果类型是 `Any`、`None` 或未知的,默认使用 `String`
|
||||
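
A rough sketch of this kind of scan using standard introspection; illustrative only, not the actual Uni-Lab-OS generator:

```python
import inspect
from typing import Any, get_type_hints

def scan_status_types(cls) -> dict:
    """Collect status names and types from @property getters and get_* methods."""
    found = {}
    for name, member in inspect.getmembers(cls):
        if isinstance(member, property) and member.fget is not None:
            found[name] = get_type_hints(member.fget).get("return", Any)
        elif name.startswith("get_") and inspect.isfunction(member):
            found[name[len("get_"):]] = get_type_hints(member).get("return", Any)
    # Unknown or Any return types fall back to "String", as noted above.
    return {name: (getattr(tp, "__name__", "String") if tp not in (Any, None, type(None)) else "String")
            for name, tp in found.items()}
```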
@@ -468,21 +459,20 @@ init_param_schema:
|
||||
```
|
||||
|
||||
**生成规则**:
|
||||
|
||||
- `config` 部分:分析 `__init__` 方法的参数、类型和默认值
|
||||
- `data` 部分:根据 `status_types` 生成前端显示用的类型定义
|
||||
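
For orientation, a simplified sketch of deriving the `config` block from `__init__` parameters; the real generator also handles nested and container types:

```python
import inspect

_JSON_TYPES = {int: "integer", float: "number", bool: "boolean", str: "string"}

def init_config_schema(cls) -> dict:
    """Build a JSON-Schema-like 'config' block from __init__ parameters (illustrative)."""
    properties, required = {}, []
    for name, param in inspect.signature(cls.__init__).parameters.items():
        if name == "self":
            continue
        prop = {"type": _JSON_TYPES.get(param.annotation, "object")}
        if param.default is not inspect.Parameter.empty:
            prop["default"] = param.default
        else:
            required.append(name)
        properties[name] = prop
    return {"type": "object", "properties": properties, "required": required}
```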
|
||||
### 其他自动填充的字段
|
||||
|
||||
```yaml
|
||||
version: '1.0.0' # 默认版本
|
||||
category: ['文件名'] # 使用 yaml 文件名作为类别
|
||||
description: '' # 默认为空
|
||||
icon: '' # 默认为空
|
||||
handles: [] # 默认空数组
|
||||
config_info: [] # 默认空数组
|
||||
version: '1.0.0' # 默认版本
|
||||
category: ['文件名'] # 使用 yaml 文件名作为类别
|
||||
description: '' # 默认为空
|
||||
icon: '' # 默认为空
|
||||
handles: [] # 默认空数组
|
||||
config_info: [] # 默认空数组
|
||||
file_path: '/path/to/file' # 系统自动填写
|
||||
registry_type: 'device' # 自动设为设备类型
|
||||
registry_type: 'device' # 自动设为设备类型
|
||||
```
|
||||
|
||||
### handles 字段
|
||||
@@ -520,7 +510,7 @@ config_info: # 嵌套配置,用于包含子设备
|
||||
|
||||
## 完整示例
|
||||
|
||||
### Python 驱动代码
|
||||
### Python驱动代码
|
||||
|
||||
```python
|
||||
# unilabos/devices/my_lab/liquid_handler.py
|
||||
@@ -530,22 +520,22 @@ from typing import List, Dict, Any, Optional
|
||||
|
||||
class AdvancedLiquidHandler:
|
||||
"""高级液体处理工作站"""
|
||||
|
||||
|
||||
def __init__(self, config: Dict[str, Any]):
|
||||
self.simulation = config.get('simulation', False)
|
||||
self._status = "idle"
|
||||
self._temperature = 25.0
|
||||
|
||||
|
||||
@property
|
||||
def status(self) -> str:
|
||||
"""设备状态"""
|
||||
return self._status
|
||||
|
||||
|
||||
@property
|
||||
def temperature(self) -> float:
|
||||
"""当前温度"""
|
||||
return self._temperature
|
||||
|
||||
|
||||
def transfer(
|
||||
self,
|
||||
source: ResourceSlot,
|
||||
@@ -555,7 +545,7 @@ class AdvancedLiquidHandler:
|
||||
) -> Dict[str, Any]:
|
||||
"""转移液体"""
|
||||
return {"success": True}
|
||||
|
||||
|
||||
def multi_transfer(
|
||||
self,
|
||||
source: ResourceSlot,
|
||||
@@ -564,7 +554,7 @@ class AdvancedLiquidHandler:
|
||||
) -> Dict[str, Any]:
|
||||
"""多目标转移"""
|
||||
return {"success": True}
|
||||
|
||||
|
||||
def coordinate_with_heater(
|
||||
self,
|
||||
plate: ResourceSlot,
|
||||
@@ -584,12 +574,12 @@ advanced_liquid_handler:
|
||||
class:
|
||||
module: unilabos.devices.my_lab.liquid_handler:AdvancedLiquidHandler
|
||||
type: python
|
||||
|
||||
|
||||
# 自动提取的状态类型
|
||||
status_types:
|
||||
status: str
|
||||
temperature: float
|
||||
|
||||
|
||||
# 自动生成的初始化参数
|
||||
init_param_schema:
|
||||
config:
|
||||
@@ -607,7 +597,7 @@ advanced_liquid_handler:
|
||||
required:
|
||||
- status
|
||||
type: object
|
||||
|
||||
|
||||
# 动作映射
|
||||
action_value_mappings:
|
||||
transfer:
|
||||
@@ -623,28 +613,28 @@ advanced_liquid_handler:
|
||||
volume: 0.0
|
||||
tip: null
|
||||
placeholder_keys:
|
||||
source: unilabos_resources # 自动添加
|
||||
target: unilabos_resources # 自动添加
|
||||
tip: unilabos_resources # 自动添加
|
||||
source: unilabos_resources # 自动添加
|
||||
target: unilabos_resources # 自动添加
|
||||
tip: unilabos_resources # 自动添加
|
||||
result:
|
||||
success: success
|
||||
schema:
|
||||
description: '转移液体'
|
||||
description: "转移液体"
|
||||
properties:
|
||||
goal:
|
||||
properties:
|
||||
source:
|
||||
type: object
|
||||
description: '源容器'
|
||||
description: "源容器"
|
||||
target:
|
||||
type: object
|
||||
description: '目标容器'
|
||||
description: "目标容器"
|
||||
volume:
|
||||
type: number
|
||||
description: '体积(μL)'
|
||||
description: "体积(μL)"
|
||||
tip:
|
||||
type: object
|
||||
description: '枪头(可选)'
|
||||
description: "枪头(可选)"
|
||||
required:
|
||||
- source
|
||||
- target
|
||||
@@ -653,7 +643,7 @@ advanced_liquid_handler:
|
||||
required:
|
||||
- goal
|
||||
type: object
|
||||
|
||||
|
||||
multi_transfer:
|
||||
type: UniLabJsonCommand
|
||||
goal:
|
||||
@@ -661,11 +651,11 @@ advanced_liquid_handler:
|
||||
targets: targets
|
||||
volumes: volumes
|
||||
placeholder_keys:
|
||||
source: unilabos_resources # 单选
|
||||
targets: unilabos_resources # 多选
|
||||
source: unilabos_resources # 单选
|
||||
targets: unilabos_resources # 多选
|
||||
result:
|
||||
success: success
|
||||
|
||||
|
||||
coordinate_with_heater:
|
||||
type: UniLabJsonCommand
|
||||
goal:
|
||||
@@ -673,17 +663,17 @@ advanced_liquid_handler:
|
||||
heater: heater
|
||||
temperature: temperature
|
||||
placeholder_keys:
|
||||
plate: unilabos_resources # 资源选择
|
||||
heater: unilabos_devices # 设备选择
|
||||
plate: unilabos_resources # 资源选择
|
||||
heater: unilabos_devices # 设备选择
|
||||
result:
|
||||
success: success
|
||||
|
||||
description: '高级液体处理工作站,支持多目标转移和设备协同'
|
||||
version: '1.0.0'
|
||||
|
||||
description: "高级液体处理工作站,支持多目标转移和设备协同"
|
||||
version: "1.0.0"
|
||||
category:
|
||||
- liquid_handling
|
||||
handles: []
|
||||
icon: ''
|
||||
icon: ""
|
||||
```
|
||||
|
||||
### 另一个完整示例:温度控制器
|
||||
@@ -902,18 +892,17 @@ unilab -g dev.json --complete_registry
|
||||
cat unilabos/registry/devices/my_device.yaml
|
||||
```
|
||||
|
||||
### 2. 验证 placeholder_keys
|
||||
### 2. 验证placeholder_keys
|
||||
|
||||
确认:
|
||||
|
||||
- ResourceSlot 参数有 `unilabos_resources`
|
||||
- DeviceSlot 参数有 `unilabos_devices`
|
||||
- List 类型被正确识别
|
||||
- ResourceSlot参数有 `unilabos_resources`
|
||||
- DeviceSlot参数有 `unilabos_devices`
|
||||
- List类型被正确识别
|
||||
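
One quick way to check this is to load the generated YAML and list every `placeholder_keys` mapping it contains. A small sketch (assumes PyYAML; the file path is the example used in step 1):

```python
import yaml  # PyYAML

def iter_placeholder_keys(node):
    """Yield every placeholder_keys mapping found anywhere in the registry tree."""
    if isinstance(node, dict):
        if isinstance(node.get("placeholder_keys"), dict):
            yield node["placeholder_keys"]
        for value in node.values():
            yield from iter_placeholder_keys(value)
    elif isinstance(node, list):
        for item in node:
            yield from iter_placeholder_keys(item)

with open("unilabos/registry/devices/my_device.yaml") as f:
    registry = yaml.safe_load(f)

for keys in iter_placeholder_keys(registry):
    for param, marker in keys.items():
        print(param, "->", marker)  # expect unilabos_resources / unilabos_devices
```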
|
||||
### 3. 测试前端效果
|
||||
|
||||
1. 启动 Uni-Lab
|
||||
2. 访问 Web 界面
|
||||
1. 启动Uni-Lab
|
||||
2. 访问Web界面
|
||||
3. 选择设备
|
||||
4. 调用动作
|
||||
5. 检查是否显示正确的选择器
|
||||
@@ -927,21 +916,18 @@ python -c "from unilabos.devices.my_module.my_device import MyDevice"
|
||||
|
||||
## 常见问题
|
||||
|
||||
### Q1: placeholder_keys 没有自动生成
|
||||
### Q1: placeholder_keys没有自动生成
|
||||
|
||||
**检查**:
|
||||
|
||||
1. 是否使用了`--complete_registry`参数?
|
||||
2. 类型注解是否正确?
|
||||
|
||||
```python
|
||||
# ✓ 正确
|
||||
def method(self, resource: ResourceSlot):
|
||||
|
||||
|
||||
# ✗ 错误(缺少类型注解)
|
||||
def method(self, resource):
|
||||
```
|
||||
|
||||
3. 是否正确导入?
|
||||
```python
|
||||
from unilabos.registry.placeholder_type import ResourceSlot, DeviceSlot
|
||||
@@ -949,10 +935,9 @@ python -c "from unilabos.devices.my_module.my_device import MyDevice"
|
||||
|
||||
### Q2: 前端显示普通输入框而不是选择器
|
||||
|
||||
**原因**: placeholder_keys 未正确配置
|
||||
**原因**: placeholder_keys未正确配置
|
||||
|
||||
**解决**:
|
||||
|
||||
```yaml
|
||||
# 检查YAML中是否有
|
||||
placeholder_keys:
|
||||
@@ -962,7 +947,6 @@ placeholder_keys:
|
||||
### Q3: 多选不工作
|
||||
|
||||
**检查类型注解**:
|
||||
|
||||
```python
|
||||
# ✓ 正确 - 会生成多选
|
||||
def method(self, resources: List[ResourceSlot]):
|
||||
@@ -976,15 +960,13 @@ def method(self, resources: ResourceSlot):
|
||||
**说明**: 运行时会自动转换
|
||||
|
||||
前端传递:
|
||||
|
||||
```json
|
||||
{
|
||||
"resource": "plate_1" // 字符串ID
|
||||
"resource": "plate_1" // 字符串ID
|
||||
}
|
||||
```
|
||||
|
||||
运行时收到:
|
||||
|
||||
```python
|
||||
resource.id # "plate_1"
|
||||
resource.name # "96孔板"
|
||||
@@ -995,7 +977,6 @@ resource.type # "resource"
|
||||
### Q5: 设备加载不了
|
||||
|
||||
**检查**:
|
||||
|
||||
1. 确认 `class.module` 路径是否正确
|
||||
2. 确认 Python 驱动类能否正常导入
|
||||
3. 使用 yaml 验证器检查文件格式
|
||||
@@ -1004,7 +985,6 @@ resource.type # "resource"
|
||||
### Q6: 自动生成失败
|
||||
|
||||
**检查**:
|
||||
|
||||
1. 确认类继承了正确的基类
|
||||
2. 确保状态方法的返回类型注解清晰
|
||||
3. 检查类能否被动态导入
|
||||
@@ -1013,7 +993,6 @@ resource.type # "resource"
|
||||
### Q7: 前端显示问题
|
||||
|
||||
**解决步骤**:
|
||||
|
||||
1. 删除旧的 yaml 文件,用编辑器重新生成
|
||||
2. 清除浏览器缓存,重新加载页面
|
||||
3. 确认必需字段(如 `schema`)都存在
|
||||
@@ -1022,7 +1001,6 @@ resource.type # "resource"
|
||||
### Q8: 动作执行出错
|
||||
|
||||
**检查**:
|
||||
|
||||
1. 确认动作方法名符合规范(如 `execute_<action_name>`)
|
||||
2. 检查 `goal` 字段的参数映射是否正确
|
||||
3. 确认方法返回值格式符合 `result` 映射
|
||||
@@ -1063,7 +1041,7 @@ def transfer(self, r1: ResourceSlot, r2: ResourceSlot):
|
||||
pass
|
||||
```
|
||||
|
||||
3. **使用 Optional 表示可选参数**
|
||||
3. **使用Optional表示可选参数**
|
||||
|
||||
```python
|
||||
from typing import Optional
|
||||
@@ -1085,11 +1063,11 @@ def method(
|
||||
targets: List[ResourceSlot] # 目标容器列表
|
||||
) -> Dict[str, Any]:
|
||||
"""方法说明
|
||||
|
||||
|
||||
Args:
|
||||
source: 源容器,必须包含足够的液体
|
||||
targets: 目标容器列表,每个容器应该为空
|
||||
|
||||
|
||||
Returns:
|
||||
包含操作结果的字典
|
||||
"""
|
||||
@@ -1097,7 +1075,6 @@ def method(
|
||||
```
|
||||
|
||||
5. **方法命名规范**
|
||||
|
||||
- 状态方法使用 `@property` 装饰器或 `get_` 前缀
|
||||
- 动作方法使用动词开头
|
||||
- 保持命名清晰、一致
|
||||
@@ -1134,6 +1111,8 @@ def method(
|
||||
|
||||
- {doc}`add_device` - 设备驱动编写指南
|
||||
- {doc}`04_add_device_testing` - 设备测试指南
|
||||
- Python [typing 模块](https://docs.python.org/3/library/typing.html)
|
||||
- [YAML 语法](https://yaml.org/)
|
||||
- Python [typing模块](https://docs.python.org/3/library/typing.html)
|
||||
- [YAML语法](https://yaml.org/)
|
||||
- [JSON Schema](https://json-schema.org/)
|
||||
|
||||
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
# 实例:电池装配工站接入(PLC 控制)
|
||||
# 实例:电池装配工站接入(PLC控制)
|
||||
|
||||
> **文档类型**:实际应用案例
|
||||
> **适用场景**:使用 PLC 控制的电池装配工站接入
|
||||
@@ -50,6 +50,8 @@ class CoinCellAssemblyWorkstation(WorkstationBase):
|
||||
self.client = tcp.register_node_list(self.nodes)
|
||||
```
|
||||
|
||||
|
||||
|
||||
## 2. 编写驱动与寄存器读写
|
||||
|
||||
### 2.1 寄存器示例
|
||||
@@ -93,9 +95,9 @@ def start_and_read_metrics(self):
|
||||
|
||||
完成工站类与驱动后,需要生成(或更新)工站注册表供系统识别。
|
||||
|
||||
### 3.1 新增工站设备(或资源)首次生成注册表
|
||||
|
||||
首先通过以下命令启动 unilab。进入 unilab 系统状态检查页面
|
||||
### 3.1 新增工站设备(或资源)首次生成注册表
|
||||
首先通过以下命令启动unilab。进入unilab系统状态检查页面
|
||||
|
||||
```bash
|
||||
python unilabos\app\main.py -g celljson.json --ak <user的AK> --sk <user的SK>
|
||||
@@ -110,32 +112,35 @@ python unilabos\app\main.py -g celljson.json --ak <user的AK> --sk <user的SK>
|
||||

|
||||
|
||||
步骤说明:
|
||||
|
||||
1. 选择新增的工站`coin_cell_assembly.py`文件
|
||||
2. 点击分析按钮,分析`coin_cell_assembly.py`文件
|
||||
3. 选择`coin_cell_assembly.py`文件中继承`WorkstationBase`类
|
||||
4. 填写新增的工站.py 文件与`unilabos`目录的距离。例如,新增的工站文件`coin_cell_assembly.py`路径为`unilabos\devices\workstation\coin_cell_assembly\coin_cell_assembly.py`,则此处填写`unilabos.devices.workstation.coin_cell_assembly`。
|
||||
4. 填写新增的工站.py文件与`unilabos`目录的距离。例如,新增的工站文件`coin_cell_assembly.py`路径为`unilabos\devices\workstation\coin_cell_assembly\coin_cell_assembly.py`,则此处填写`unilabos.devices.workstation.coin_cell_assembly`。
|
||||
5. 此处填写新定义工站的类的名字(名称可以自拟)
|
||||
6. 填写新的工站注册表备注信息
|
||||
7. 生成注册表
|
||||
|
||||
以上操作步骤完成,则会生成的新的注册表 YAML 文件,如下图:
|
||||
以上操作步骤完成,则会生成的新的注册表YAML文件,如下图:
|
||||
|
||||

|
||||
|
||||
### 3.2 添加新生成注册表
|
||||
|
||||
在`unilabos\registry\devices`目录下新建一个 yaml 文件,此处新建文件命名为`coincellassemblyworkstation_device.yaml`,将上面生成的新的注册表信息粘贴到`coincellassemblyworkstation_device.yaml`文件中。
|
||||
|
||||
|
||||
|
||||
|
||||
### 3.2 添加新生成注册表
|
||||
在`unilabos\registry\devices`目录下新建一个yaml文件,此处新建文件命名为`coincellassemblyworkstation_device.yaml`,将上面生成的新的注册表信息粘贴到`coincellassemblyworkstation_device.yaml`文件中。
|
||||
|
||||
在终端输入以下命令进行注册表补全操作。
|
||||
|
||||
```bash
|
||||
python unilabos\app\register.py --complete_registry
|
||||
```
|
||||
|
||||
|
||||
### 3.3 启动并上传注册表
|
||||
|
||||
新增设备之后,启动 unilab 需要增加`--upload_registry`参数,来上传注册表信息。
|
||||
新增设备之后,启动unilab需要增加`--upload_registry`参数,来上传注册表信息。
|
||||
|
||||
```bash
|
||||
python unilabos\app\main.py -g celljson.json --ak <user的AK> --sk <user的SK> --upload_registry
|
||||
@@ -154,7 +159,6 @@ module: unilabos.devices.workstation.coin_cell_assembly.coin_cell_assembly:CoinC
|
||||
### 4.2 首次接入流程
|
||||
|
||||
首次新增设备(或资源)需要完整流程:
|
||||
|
||||
1. ✅ 在网页端生成注册表信息
|
||||
2. ✅ 使用 `--complete_registry` 补全注册表
|
||||
3. ✅ 使用 `--upload_registry` 上传注册表信息
|
||||
@@ -162,12 +166,11 @@ module: unilabos.devices.workstation.coin_cell_assembly.coin_cell_assembly:CoinC
|
||||
### 4.3 驱动更新流程
|
||||
|
||||
如果不是新增设备,仅修改了工站驱动的 `.py` 文件:
|
||||
|
||||
1. ✅ 运行 `--complete_registry` 补全注册表
|
||||
2. ✅ 运行 `--upload_registry` 上传注册表
|
||||
3. ❌ 不需要在网页端重新生成注册表
|
||||
|
||||
### 4.4 PLC 通信注意事项
|
||||
### 4.4 PLC通信注意事项
|
||||
|
||||
- **握手机制**:若需参数下发,建议在 PLC 端设置标志寄存器并完成握手复位,避免粘连与竞争
|
||||
- **字节序**:FLOAT32 等多字节数据类型需要正确指定字节序(如 `WorderOrder.LITTLE`)
|
||||
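
For orientation, a hedged pymodbus-style sketch of the handshake-plus-FLOAT32 pattern described above. The register addresses, the flag convention, and the use of `pymodbus` are assumptions, not the workstation's actual driver:

```python
import struct
import time

from pymodbus.client import ModbusTcpClient  # assumed stand-in for the PLC client

START_FLAG_REG = 100    # assumed handshake register: host writes 1, PLC resets it to 0
METRIC_FLOAT_REG = 200  # assumed FLOAT32 metric stored in two 16-bit registers

def decode_float32_low_word_first(regs):
    """Two 16-bit registers -> FLOAT32 with the low word first (cf. WorderOrder.LITTLE)."""
    raw = struct.pack(">HH", regs[1], regs[0])  # reorder words, big-endian bytes inside each word
    return struct.unpack(">f", raw)[0]

client = ModbusTcpClient("192.168.0.10", port=502)
client.connect()
client.write_register(START_FLAG_REG, 1)  # request start
while client.read_holding_registers(START_FLAG_REG, count=1).registers[0] != 0:
    time.sleep(0.1)  # wait for the PLC to acknowledge and reset the flag
metric = decode_float32_low_word_first(
    client.read_holding_registers(METRIC_FLOAT_REG, count=2).registers)
print("metric:", metric)
client.close()
```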
@@ -200,3 +203,5 @@ module: unilabos.devices.workstation.coin_cell_assembly.coin_cell_assembly:CoinC
|
||||
5. ✅ 新增设备与更新驱动的区别
|
||||
|
||||
这个案例展示了完整的 PLC 设备接入流程,可以作为其他类似设备接入的参考模板。
|
||||
|
||||
|
||||
|
||||
@@ -16,8 +16,8 @@
|
||||
|
||||
这类工站由开发者自研,组合所有子设备和实验耗材、希望让他们在工作站这一级协调配合;
|
||||
|
||||
1. 工作站包含大量已经注册的子设备,可能各自通信组态很不相同;部分设备可能会拥有同一个通信设备作为出口,如 2 个泵共用 1 个串口、所有设备共同接入 PLC 等。
|
||||
2. 任务系统是统一实现的 protocols,protocols 中会将高层指令处理成各子设备配合的工作流 json 并管理执行、同时更改物料信息
|
||||
1. 工作站包含大量已经注册的子设备,可能各自通信组态很不相同;部分设备可能会拥有同一个通信设备作为出口,如2个泵共用1个串口、所有设备共同接入PLC等。
|
||||
2. 任务系统是统一实现的 protocols,protocols 中会将高层指令处理成各子设备配合的工作流 json并管理执行、同时更改物料信息
|
||||
3. 物料系统较为简单直接,如常量有机化学仅为工作站内固定的瓶子,初始化时就已固定;随后在任务执行过程中,记录试剂量更改信息
|
||||
|
||||
### 0.2 移液工作站:物料系统和工作流模板管理
|
||||
@@ -35,7 +35,7 @@
|
||||
由厂家开发,具备完善的物料系统、任务系统甚至调度系统;由 PLC 或 OpenAPI TCP 协议统一通信
|
||||
|
||||
1. 在监控状态时,希望展现子设备的状态;但子设备仅为逻辑概念,通信由工作站上位机接口提供;部分情况下,子设备状态是被记录在文件中的,需要读取
|
||||
2. 工作站有自己的工作流系统甚至调度系统;可以通过脚本/PLC 连续读写来配置工作站可用的工作流;
|
||||
2. 工作站有自己的工作流系统甚至调度系统;可以通过脚本/PLC连续读写来配置工作站可用的工作流;
|
||||
3. 部分拥有完善的物料入库、出库、过程记录,需要与 Uni-Lab-OS 物料系统对接
|
||||
|
||||
## 1. 整体架构图
|
||||
@@ -49,7 +49,7 @@ graph TB
|
||||
RPN[ROS2WorkstationNode<br/>Protocol执行引擎]
|
||||
WB -.post_init关联.-> RPN
|
||||
end
|
||||
|
||||
|
||||
subgraph "物料管理系统"
|
||||
DECK[Deck<br/>PLR本地物料系统]
|
||||
RS[ResourceSynchronizer<br/>外部物料同步器]
|
||||
@@ -57,7 +57,7 @@ graph TB
|
||||
WB --> RS
|
||||
RS --> DECK
|
||||
end
|
||||
|
||||
|
||||
subgraph "通信与子设备管理"
|
||||
HW[hardware_interface<br/>硬件通信接口]
|
||||
SUBDEV[子设备集合<br/>pumps/grippers/sensors]
|
||||
@@ -65,7 +65,7 @@ graph TB
|
||||
RPN --> SUBDEV
|
||||
HW -.代理模式.-> RPN
|
||||
end
|
||||
|
||||
|
||||
subgraph "工作流任务系统"
|
||||
PROTO[Protocol定义<br/>LiquidHandling/PlateHandling]
|
||||
WORKFLOW[Workflow执行器<br/>步骤管理与编排]
|
||||
@@ -85,32 +85,32 @@ graph LR
|
||||
HW2[通信接口<br/>hardware_interface]
|
||||
HTTP[HTTP服务<br/>WorkstationHTTPService]
|
||||
end
|
||||
|
||||
|
||||
subgraph "外部物料系统"
|
||||
BIOYOND[Bioyond物料管理]
|
||||
LIMS[LIMS系统]
|
||||
WAREHOUSE[第三方仓储]
|
||||
end
|
||||
|
||||
|
||||
subgraph "外部硬件系统"
|
||||
PLC[PLC设备]
|
||||
SERIAL[串口设备]
|
||||
ROBOT[机械臂/机器人]
|
||||
end
|
||||
|
||||
|
||||
subgraph "云端系统"
|
||||
CLOUD[UniLab云端<br/>资源管理]
|
||||
MONITOR[监控与调度]
|
||||
end
|
||||
|
||||
|
||||
BIOYOND <-->|RPC双向同步| DECK2
|
||||
LIMS -->|HTTP报送| HTTP
|
||||
WAREHOUSE <-->|API对接| DECK2
|
||||
|
||||
|
||||
PLC <-->|Modbus TCP| HW2
|
||||
SERIAL <-->|串口通信| HW2
|
||||
ROBOT <-->|SDK/API| HW2
|
||||
|
||||
|
||||
WS -->|ROS消息| CLOUD
|
||||
CLOUD -->|任务下发| WS
|
||||
MONITOR -->|状态查询| WS
|
||||
@@ -123,40 +123,40 @@ graph TB
|
||||
subgraph "工作站基类"
|
||||
BASE[WorkstationBase<br/>抽象基类]
|
||||
end
|
||||
|
||||
|
||||
subgraph "Bioyond集成工作站"
|
||||
BW[BioyondWorkstation]
|
||||
BW_DECK[Deck + Warehouses]
|
||||
BW_SYNC[BioyondResourceSynchronizer]
|
||||
BW_HW[BioyondV1RPC]
|
||||
BW_HTTP[HTTP报送服务]
|
||||
|
||||
|
||||
BW --> BW_DECK
|
||||
BW --> BW_SYNC
|
||||
BW --> BW_HW
|
||||
BW --> BW_HTTP
|
||||
end
|
||||
|
||||
|
||||
subgraph "纯协议节点"
|
||||
PN[ProtocolNode]
|
||||
PN_SUB[子设备集合]
|
||||
PN_PROTO[Protocol工作流]
|
||||
|
||||
|
||||
PN --> PN_SUB
|
||||
PN --> PN_PROTO
|
||||
end
|
||||
|
||||
|
||||
subgraph "PLC控制工作站"
|
||||
PW[PLCWorkstation]
|
||||
PW_DECK[Deck物料系统]
|
||||
PW_PLC[Modbus PLC客户端]
|
||||
PW_WF[工作流定义]
|
||||
|
||||
|
||||
PW --> PW_DECK
|
||||
PW --> PW_PLC
|
||||
PW --> PW_WF
|
||||
end
|
||||
|
||||
|
||||
BASE -.继承.-> BW
|
||||
BASE -.继承.-> PN
|
||||
BASE -.继承.-> PW
|
||||
@@ -175,25 +175,25 @@ classDiagram
|
||||
+hardware_interface: Union[Any, str]
|
||||
+current_workflow_status: WorkflowStatus
|
||||
+supported_workflows: Dict[str, WorkflowInfo]
|
||||
|
||||
|
||||
+post_init(ros_node)*
|
||||
+set_hardware_interface(interface)
|
||||
+call_device_method(method, *args, **kwargs)
|
||||
+get_device_status()
|
||||
+is_device_available()
|
||||
|
||||
|
||||
+get_deck()
|
||||
+get_all_resources()
|
||||
+find_resource_by_name(name)
|
||||
+find_resources_by_type(type)
|
||||
+sync_with_external_system()
|
||||
|
||||
|
||||
+execute_workflow(name, params)
|
||||
+stop_workflow(emergency)
|
||||
+workflow_status
|
||||
+is_busy
|
||||
}
|
||||
|
||||
|
||||
class ROS2WorkstationNode {
|
||||
+device_id: str
|
||||
+children: Dict[str, Any]
|
||||
@@ -202,7 +202,7 @@ classDiagram
|
||||
+_action_clients: Dict
|
||||
+_action_servers: Dict
|
||||
+resource_tracker: DeviceNodeResourceTracker
|
||||
|
||||
|
||||
+initialize_device(device_id, config)
|
||||
+create_ros_action_server(action_name, mapping)
|
||||
+execute_single_action(device_id, action, kwargs)
|
||||
@@ -210,14 +210,14 @@ classDiagram
|
||||
+transfer_resource_to_another(resources, target, sites)
|
||||
+_setup_hardware_proxy(device, comm_device, read, write)
|
||||
}
|
||||
|
||||
|
||||
%% 物料管理相关类
|
||||
class Deck {
|
||||
+name: str
|
||||
+children: List
|
||||
+assign_child_resource()
|
||||
}
|
||||
|
||||
|
||||
class ResourceSynchronizer {
|
||||
<<abstract>>
|
||||
+workstation: WorkstationBase
|
||||
@@ -225,23 +225,23 @@ classDiagram
|
||||
+sync_to_external(plr_resource)*
|
||||
+handle_external_change(change_info)*
|
||||
}
|
||||
|
||||
|
||||
class BioyondResourceSynchronizer {
|
||||
+bioyond_api_client: BioyondV1RPC
|
||||
+sync_interval: int
|
||||
+last_sync_time: float
|
||||
|
||||
|
||||
+initialize()
|
||||
+sync_from_external()
|
||||
+sync_to_external(resource)
|
||||
+handle_external_change(change_info)
|
||||
}
|
||||
|
||||
|
||||
%% 硬件接口相关类
|
||||
class HardwareInterface {
|
||||
<<interface>>
|
||||
}
|
||||
|
||||
|
||||
class BioyondV1RPC {
|
||||
+base_url: str
|
||||
+api_key: str
|
||||
@@ -249,7 +249,7 @@ classDiagram
|
||||
+add_material()
|
||||
+material_inbound()
|
||||
}
|
||||
|
||||
|
||||
%% 服务类
|
||||
class WorkstationHTTPService {
|
||||
+workstation: WorkstationBase
|
||||
@@ -257,7 +257,7 @@ classDiagram
|
||||
+port: int
|
||||
+server: HTTPServer
|
||||
+running: bool
|
||||
|
||||
|
||||
+start()
|
||||
+stop()
|
||||
+_handle_step_finish_report()
|
||||
@@ -266,13 +266,13 @@ classDiagram
|
||||
+_handle_material_change_report()
|
||||
+_handle_error_handling_report()
|
||||
}
|
||||
|
||||
|
||||
%% 具体实现类
|
||||
class BioyondWorkstation {
|
||||
+bioyond_config: Dict
|
||||
+workflow_mappings: Dict
|
||||
+workflow_sequence: List
|
||||
|
||||
|
||||
+post_init(ros_node)
|
||||
+transfer_resource_to_another()
|
||||
+resource_tree_add(resources)
|
||||
@@ -280,25 +280,25 @@ classDiagram
|
||||
+get_all_workflows()
|
||||
+get_bioyond_status()
|
||||
}
|
||||
|
||||
|
||||
class ProtocolNode {
|
||||
+post_init(ros_node)
|
||||
}
|
||||
|
||||
|
||||
%% 核心关系
|
||||
WorkstationBase o-- ROS2WorkstationNode : post_init关联
|
||||
WorkstationBase o-- WorkstationHTTPService : 可选服务
|
||||
|
||||
|
||||
%% 物料管理侧
|
||||
WorkstationBase *-- Deck : deck
|
||||
WorkstationBase *-- ResourceSynchronizer : 可选组合
|
||||
ResourceSynchronizer <|-- BioyondResourceSynchronizer
|
||||
|
||||
|
||||
%% 硬件接口侧
|
||||
WorkstationBase o-- HardwareInterface : hardware_interface
|
||||
HardwareInterface <|.. BioyondV1RPC : 实现
|
||||
BioyondResourceSynchronizer --> BioyondV1RPC : 使用
|
||||
|
||||
|
||||
%% 继承关系
|
||||
BioyondWorkstation --|> WorkstationBase
|
||||
ProtocolNode --|> WorkstationBase
|
||||
@@ -316,49 +316,49 @@ sequenceDiagram
|
||||
participant HW as HardwareInterface
|
||||
participant ROS as ROS2WorkstationNode
|
||||
participant HTTP as HTTPService
|
||||
|
||||
|
||||
APP->>WS: 创建工作站实例(__init__)
|
||||
WS->>DECK: 初始化PLR Deck
|
||||
DECK->>DECK: 创建Warehouse等子资源
|
||||
DECK-->>WS: Deck创建完成
|
||||
|
||||
|
||||
WS->>HW: 创建硬件接口(如BioyondV1RPC)
|
||||
HW->>HW: 建立连接(PLC/RPC/串口等)
|
||||
HW-->>WS: 硬件接口就绪
|
||||
|
||||
|
||||
WS->>SYNC: 创建ResourceSynchronizer(可选)
|
||||
SYNC->>HW: 使用hardware_interface
|
||||
SYNC->>SYNC: 初始化同步配置
|
||||
SYNC-->>WS: 同步器创建完成
|
||||
|
||||
|
||||
WS->>SYNC: sync_from_external()
|
||||
SYNC->>HW: 查询外部物料系统
|
||||
HW-->>SYNC: 返回物料数据
|
||||
SYNC->>DECK: 转换并添加到Deck
|
||||
SYNC-->>WS: 同步完成
|
||||
|
||||
|
||||
Note over WS: __init__完成,等待ROS节点
|
||||
|
||||
|
||||
APP->>ROS: 初始化ROS2WorkstationNode
|
||||
ROS->>ROS: 初始化子设备(children)
|
||||
ROS->>ROS: 创建Action客户端
|
||||
ROS->>ROS: 设置硬件接口代理
|
||||
ROS-->>APP: ROS节点就绪
|
||||
|
||||
|
||||
APP->>WS: post_init(ros_node)
|
||||
WS->>WS: self._ros_node = ros_node
|
||||
WS->>ROS: update_resource([deck])
|
||||
ROS->>ROS: 上传物料到云端
|
||||
ROS-->>WS: 上传完成
|
||||
|
||||
|
||||
WS->>HTTP: 创建WorkstationHTTPService(可选)
|
||||
HTTP->>HTTP: 启动HTTP服务器线程
|
||||
HTTP-->>WS: HTTP服务启动
|
||||
|
||||
|
||||
WS-->>APP: 工作站完全就绪
|
||||
```
|
||||
|
||||
## 4. 工作流执行时序图(Protocol 模式)
|
||||
## 4. 工作流执行时序图(Protocol模式)
|
||||
|
||||
```{mermaid}
|
||||
sequenceDiagram
|
||||
@@ -369,15 +369,15 @@ sequenceDiagram
|
||||
participant DECK as PLR Deck
|
||||
participant CLOUD as 云端资源管理
|
||||
participant DEV as 子设备
|
||||
|
||||
|
||||
CLIENT->>ROS: 发送Protocol Action请求
|
||||
ROS->>ROS: execute_protocol回调
|
||||
ROS->>ROS: 从Goal提取参数
|
||||
ROS->>ROS: 调用protocol_steps_generator
|
||||
ROS->>ROS: 生成action步骤列表
|
||||
|
||||
|
||||
ROS->>WS: 更新workflow_status = RUNNING
|
||||
|
||||
|
||||
loop 执行每个步骤
|
||||
alt 调用子设备
|
||||
ROS->>ROS: execute_single_action(device_id, action, params)
|
||||
@@ -398,19 +398,19 @@ sequenceDiagram
|
||||
end
|
||||
WS-->>ROS: 返回结果
|
||||
end
|
||||
|
||||
|
||||
ROS->>DECK: 更新本地物料状态
|
||||
DECK->>DECK: 修改PLR资源属性
|
||||
end
|
||||
|
||||
|
||||
ROS->>CLOUD: 同步物料到云端(可选)
|
||||
CLOUD-->>ROS: 同步完成
|
||||
|
||||
|
||||
ROS->>WS: 更新workflow_status = COMPLETED
|
||||
ROS-->>CLIENT: 返回Protocol Result
|
||||
```
|
||||
|
||||
## 5. HTTP 报送处理时序图
|
||||
## 5. HTTP报送处理时序图
|
||||
|
||||
```{mermaid}
|
||||
sequenceDiagram
|
||||
@@ -420,25 +420,25 @@ sequenceDiagram
|
||||
participant DECK as PLR Deck
|
||||
participant SYNC as ResourceSynchronizer
|
||||
participant CLOUD as 云端
|
||||
|
||||
|
||||
EXT->>HTTP: POST /report/step_finish
|
||||
HTTP->>HTTP: 解析请求数据
|
||||
HTTP->>HTTP: 验证LIMS协议字段
|
||||
HTTP->>WS: process_step_finish_report(request)
|
||||
|
||||
|
||||
WS->>WS: 增加接收计数(_reports_received_count++)
|
||||
WS->>WS: 记录步骤完成事件
|
||||
WS->>DECK: 更新相关物料状态(可选)
|
||||
DECK->>DECK: 修改PLR资源状态
|
||||
|
||||
|
||||
WS->>WS: 保存报送记录到内存
|
||||
|
||||
|
||||
WS-->>HTTP: 返回处理结果
|
||||
HTTP->>HTTP: 构造HTTP响应
|
||||
HTTP-->>EXT: 200 OK + acknowledgment_id
|
||||
|
||||
|
||||
Note over EXT,CLOUD: 类似处理sample_finish, order_finish等报送
|
||||
|
||||
|
||||
alt 物料变更报送
|
||||
EXT->>HTTP: POST /report/material_change
|
||||
HTTP->>WS: process_material_change_report(data)
|
||||
@@ -463,7 +463,7 @@ sequenceDiagram
|
||||
participant HW as HardwareInterface
|
||||
participant HTTP as HTTPService
|
||||
participant LOG as 日志系统
|
||||
|
||||
|
||||
alt 设备错误(ROS Action失败)
|
||||
DEV->>ROS: Action返回失败结果
|
||||
ROS->>ROS: 记录错误信息
|
||||
@@ -475,7 +475,7 @@ sequenceDiagram
|
||||
WS->>WS: 记录错误历史
|
||||
WS->>LOG: 记录错误日志
|
||||
end
|
||||
|
||||
|
||||
alt 关键错误需要停止
|
||||
WS->>ROS: stop_workflow(emergency=True)
|
||||
ROS->>ROS: 取消所有进行中的Action
|
||||
@@ -487,44 +487,44 @@ sequenceDiagram
|
||||
WS->>ROS: 触发重试逻辑(可选)
|
||||
ROS->>DEV: 重新发送Action
|
||||
end
|
||||
|
||||
|
||||
WS-->>HTTP: 返回错误处理结果
|
||||
HTTP-->>DEV: 200 OK + 处理状态
|
||||
```
|
||||
|
||||
## 7. 典型工作站实现示例
|
||||
|
||||
### 7.1 Bioyond 集成工作站实现
|
||||
### 7.1 Bioyond集成工作站实现
|
||||
|
||||
```python
|
||||
class BioyondWorkstation(WorkstationBase):
|
||||
def __init__(self, bioyond_config: Dict, deck: Deck, *args, **kwargs):
|
||||
# 初始化deck
|
||||
super().__init__(deck=deck, *args, **kwargs)
|
||||
|
||||
|
||||
# 设置硬件接口为Bioyond RPC客户端
|
||||
self.hardware_interface = BioyondV1RPC(bioyond_config)
|
||||
|
||||
|
||||
# 创建资源同步器
|
||||
self.resource_synchronizer = BioyondResourceSynchronizer(self)
|
||||
|
||||
|
||||
# 从Bioyond同步物料到本地deck
|
||||
self.resource_synchronizer.sync_from_external()
|
||||
|
||||
|
||||
# 配置工作流
|
||||
self.workflow_mappings = bioyond_config.get("workflow_mappings", {})
|
||||
|
||||
|
||||
def post_init(self, ros_node: ROS2WorkstationNode):
|
||||
"""ROS节点就绪后的初始化"""
|
||||
self._ros_node = ros_node
|
||||
|
||||
|
||||
# 上传deck(包括所有物料)到云端
|
||||
ROS2DeviceNode.run_async_func(
|
||||
self._ros_node.update_resource,
|
||||
True,
|
||||
self._ros_node.update_resource,
|
||||
True,
|
||||
resources=[self.deck]
|
||||
)
|
||||
|
||||
|
||||
def resource_tree_add(self, resources: List[ResourcePLR]):
|
||||
"""添加物料并同步到Bioyond"""
|
||||
for resource in resources:
|
||||
@@ -537,24 +537,24 @@ class BioyondWorkstation(WorkstationBase):
|
||||
```python
|
||||
class ProtocolNode(WorkstationBase):
|
||||
"""纯协议节点,不需要物料管理和外部通信"""
|
||||
|
||||
|
||||
def __init__(self, deck: Optional[Deck] = None, *args, **kwargs):
|
||||
super().__init__(deck=deck, *args, **kwargs)
|
||||
# 不设置hardware_interface和resource_synchronizer
|
||||
# 所有功能通过子设备协同完成
|
||||
|
||||
|
||||
def post_init(self, ros_node: ROS2WorkstationNode):
|
||||
self._ros_node = ros_node
|
||||
# 不需要上传物料或其他初始化
|
||||
```
|
||||
|
||||
### 7.3 PLC 直接控制工作站
|
||||
### 7.3 PLC直接控制工作站
|
||||
|
||||
```python
|
||||
class PLCWorkstation(WorkstationBase):
|
||||
def __init__(self, plc_config: Dict, deck: Deck, *args, **kwargs):
|
||||
super().__init__(deck=deck, *args, **kwargs)
|
||||
|
||||
|
||||
# 设置硬件接口为Modbus客户端
|
||||
from pymodbus.client import ModbusTcpClient
|
||||
self.hardware_interface = ModbusTcpClient(
|
||||
@@ -562,7 +562,7 @@ class PLCWorkstation(WorkstationBase):
|
||||
port=plc_config["port"]
|
||||
)
|
||||
self.hardware_interface.connect()
|
||||
|
||||
|
||||
# 定义支持的工作流
|
||||
self.supported_workflows = {
|
||||
"battery_assembly": WorkflowInfo(
|
||||
@@ -574,49 +574,49 @@ class PLCWorkstation(WorkstationBase):
|
||||
parameters_schema={"quantity": int, "model": str}
|
||||
)
|
||||
}
|
||||
|
||||
|
||||
def execute_workflow(self, workflow_name: str, parameters: Dict):
|
||||
"""通过PLC执行工作流"""
|
||||
workflow_id = self._get_workflow_id(workflow_name)
|
||||
|
||||
|
||||
# 写入PLC寄存器启动工作流
|
||||
self.hardware_interface.write_register(100, workflow_id)
|
||||
self.hardware_interface.write_register(101, parameters["quantity"])
|
||||
|
||||
|
||||
self.current_workflow_status = WorkflowStatus.RUNNING
|
||||
return True
|
||||
```
|
||||
|
||||
## 8. 核心接口说明
|
||||
|
||||
### 8.1 WorkstationBase 核心属性
|
||||
### 8.1 WorkstationBase核心属性
|
||||
|
||||
| 属性 | 类型 | 说明 |
|
||||
| ------------------------- | ----------------------- | ------------------------------- |
|
||||
| `_ros_node` | ROS2WorkstationNode | ROS 节点引用,由 post_init 设置 |
|
||||
| `deck` | Deck | PyLabRobot Deck,本地物料系统 |
|
||||
| `plr_resources` | Dict[str, PLRResource] | 物料资源映射 |
|
||||
| `resource_synchronizer` | ResourceSynchronizer | 外部物料同步器(可选) |
|
||||
| `hardware_interface` | Union[Any, str] | 硬件接口或代理字符串 |
|
||||
| `current_workflow_status` | WorkflowStatus | 当前工作流状态 |
|
||||
| `supported_workflows` | Dict[str, WorkflowInfo] | 支持的工作流定义 |
|
||||
| 属性 | 类型 | 说明 |
|
||||
| --------------------------- | ----------------------- | ----------------------------- |
|
||||
| `_ros_node` | ROS2WorkstationNode | ROS节点引用,由post_init设置 |
|
||||
| `deck` | Deck | PyLabRobot Deck,本地物料系统 |
|
||||
| `plr_resources` | Dict[str, PLRResource] | 物料资源映射 |
|
||||
| `resource_synchronizer` | ResourceSynchronizer | 外部物料同步器(可选) |
|
||||
| `hardware_interface` | Union[Any, str] | 硬件接口或代理字符串 |
|
||||
| `current_workflow_status` | WorkflowStatus | 当前工作流状态 |
|
||||
| `supported_workflows` | Dict[str, WorkflowInfo] | 支持的工作流定义 |
|
||||
|
||||
### 8.2 必须实现的方法
|
||||
|
||||
- `post_init(ros_node)`: ROS 节点就绪后的初始化,必须实现
|
||||
- `post_init(ros_node)`: ROS节点就绪后的初始化,必须实现
|
||||
|
||||
### 8.3 硬件接口相关方法
|
||||
|
||||
- `set_hardware_interface(interface)`: 设置硬件接口
|
||||
- `call_device_method(method, *args, **kwargs)`: 统一设备方法调用
|
||||
- 支持直接模式: 直接调用 hardware_interface 的方法
|
||||
- 支持代理模式: hardware_interface="proxy:device_id"通过 ROS 转发
|
||||
- 支持直接模式: 直接调用hardware_interface的方法
|
||||
- 支持代理模式: hardware_interface="proxy:device_id"通过ROS转发
|
||||
- `get_device_status()`: 获取设备状态
|
||||
- `is_device_available()`: 检查设备可用性
|
||||
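
A rough sketch of what the direct/proxy dispatch described above could look like; illustrative only, and the forwarding call is only assumed to go through `execute_single_action`:

```python
class CallDispatchSketch:
    """Illustrative direct vs. proxy dispatch for call_device_method."""

    def __init__(self, hardware_interface, ros_node=None):
        self.hardware_interface = hardware_interface  # an object, or "proxy:<device_id>"
        self._ros_node = ros_node

    def call_device_method(self, method: str, *args, **kwargs):
        iface = self.hardware_interface
        if isinstance(iface, str) and iface.startswith("proxy:"):
            device_id = iface.split(":", 1)[1]
            # Proxy mode: forward the call to the child device through the ROS node.
            return self._ros_node.execute_single_action(device_id, method, kwargs)
        # Direct mode: call the method on the hardware interface object itself.
        return getattr(iface, method)(*args, **kwargs)
```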
|
||||
### 8.4 物料管理方法
|
||||
|
||||
- `get_deck()`: 获取 PLR Deck
|
||||
- `get_deck()`: 获取PLR Deck
|
||||
- `get_all_resources()`: 获取所有物料
|
||||
- `find_resource_by_name(name)`: 按名称查找物料
|
||||
- `find_resources_by_type(type)`: 按类型查找物料
|
||||
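
A short usage sketch of these lookup helpers; the `workstation` instance, resource names, and the string type argument are hypothetical:

```python
deck = workstation.get_deck()                               # PLR Deck
plate = workstation.find_resource_by_name("plate_1")        # hypothetical resource name
tip_racks = workstation.find_resources_by_type("TipRack")   # hypothetical type value
print(deck.name, plate, len(tip_racks))
```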
@@ -630,7 +630,7 @@ class PLCWorkstation(WorkstationBase):
|
||||
- `is_busy`: 检查是否忙碌(属性)
|
||||
- `workflow_runtime`: 获取运行时间(属性)
|
||||
|
||||
### 8.6 可选的 HTTP 报送处理方法
|
||||
### 8.6 可选的HTTP报送处理方法
|
||||
|
||||
- `process_step_finish_report()`: 步骤完成处理
|
||||
- `process_sample_finish_report()`: 样本完成处理
|
||||
@@ -638,10 +638,10 @@ class PLCWorkstation(WorkstationBase):
|
||||
- `process_material_change_report()`: 物料变更处理
|
||||
- `handle_external_error()`: 错误处理
|
||||
|
||||
### 8.7 ROS2WorkstationNode 核心方法
|
||||
### 8.7 ROS2WorkstationNode核心方法
|
||||
|
||||
- `initialize_device(device_id, config)`: 初始化子设备
|
||||
- `create_ros_action_server(action_name, mapping)`: 创建 Action 服务器
|
||||
- `create_ros_action_server(action_name, mapping)`: 创建Action服务器
|
||||
- `execute_single_action(device_id, action, kwargs)`: 执行单个动作
|
||||
- `update_resource(resources)`: 同步物料到云端
|
||||
- `transfer_resource_to_another(...)`: 跨设备物料转移
|
||||
@@ -698,7 +698,7 @@ workstation = BioyondWorkstation(
|
||||
"config": {...}
|
||||
},
|
||||
"gripper_1": {
|
||||
"type": "device",
|
||||
"type": "device",
|
||||
"driver": "RobotiqGripperDriver",
|
||||
"communication": "io_modbus_1",
|
||||
"config": {...}
|
||||
@@ -720,7 +720,7 @@ workstation = BioyondWorkstation(
|
||||
}
|
||||
```
|
||||
|
||||
### 9.3 HTTP 服务配置
|
||||
### 9.3 HTTP服务配置
|
||||
|
||||
```python
|
||||
from unilabos.devices.workstation.workstation_http_service import WorkstationHTTPService
|
||||
@@ -741,31 +741,31 @@ http_service.start()
|
||||
### 10.1 清晰的职责分离
|
||||
|
||||
- **WorkstationBase**: 负责物料管理(deck)、硬件接口(hardware_interface)、工作流状态管理
|
||||
- **ROS2WorkstationNode**: 负责子设备管理、Protocol 执行、云端物料同步
|
||||
- **ResourceSynchronizer**: 可选的外部物料系统同步(如 Bioyond)
|
||||
- **WorkstationHTTPService**: 可选的 HTTP 报送接收服务
|
||||
- **ROS2WorkstationNode**: 负责子设备管理、Protocol执行、云端物料同步
|
||||
- **ResourceSynchronizer**: 可选的外部物料系统同步(如Bioyond)
|
||||
- **WorkstationHTTPService**: 可选的HTTP报送接收服务
|
||||
|
||||
### 10.2 灵活的硬件接口模式
|
||||
|
||||
1. **直接模式**: hardware_interface 是具体对象(如 BioyondV1RPC、ModbusClient)
|
||||
2. **代理模式**: hardware_interface="proxy:device_id",通过 ROS 节点转发到子设备
|
||||
1. **直接模式**: hardware_interface是具体对象(如BioyondV1RPC、ModbusClient)
|
||||
2. **代理模式**: hardware_interface="proxy:device_id",通过ROS节点转发到子设备
|
||||
3. **混合模式**: 工作站有自己的接口,同时管理多个子设备
|
||||
|
||||
### 10.3 统一的物料系统
|
||||
|
||||
- 基于 PyLabRobot Deck 的标准化物料表示
|
||||
- 通过 ResourceSynchronizer 实现与外部系统(如 Bioyond、LIMS)的双向同步
|
||||
- 通过 ROS2WorkstationNode 实现与云端的物料状态同步
|
||||
- 基于PyLabRobot Deck的标准化物料表示
|
||||
- 通过ResourceSynchronizer实现与外部系统(如Bioyond、LIMS)的双向同步
|
||||
- 通过ROS2WorkstationNode实现与云端的物料状态同步
|
||||
|
||||
### 10.4 Protocol 驱动的工作流
|
||||
### 10.4 Protocol驱动的工作流
|
||||
|
||||
- ROS2WorkstationNode 负责 Protocol 的执行和步骤管理
|
||||
- 支持子设备协同(通过 Action Client 调用)
|
||||
- 支持工作站直接控制(通过 hardware_interface)
|
||||
- ROS2WorkstationNode负责Protocol的执行和步骤管理
|
||||
- 支持子设备协同(通过Action Client调用)
|
||||
- 支持工作站直接控制(通过hardware_interface)
|
||||
|
||||
### 10.5 可选的 HTTP 报送服务
|
||||
### 10.5 可选的HTTP报送服务
|
||||
|
||||
- 基于 LIMS 协议规范的统一报送接口
|
||||
- 基于LIMS协议规范的统一报送接口
|
||||
- 支持步骤完成、样本完成、任务完成、物料变更等多种报送类型
|
||||
- 与工作站解耦,可独立启停
|
||||
|
||||
|
||||
@@ -1,334 +0,0 @@
|
||||
# HTTP API 指南
|
||||
|
||||
本文档介绍如何通过 HTTP API 与 Uni-Lab-OS 进行交互,包括查询设备、提交任务和获取结果。
|
||||
|
||||
## 概述
|
||||
|
||||
Uni-Lab-OS 提供 RESTful HTTP API,允许外部系统通过标准 HTTP 请求控制实验室设备。API 基于 FastAPI 构建,默认运行在 `http://localhost:8002`。
|
||||
|
||||
### 基础信息
|
||||
|
||||
- **Base URL**: `http://localhost:8002/api/v1`
|
||||
- **Content-Type**: `application/json`
|
||||
- **响应格式**: JSON
|
||||
|
||||
### 通用响应结构
|
||||
|
||||
```json
|
||||
{
|
||||
"code": 0,
|
||||
"data": { ... },
|
||||
"message": "success"
|
||||
}
|
||||
```
|
||||
|
||||
| 字段 | 类型 | 说明 |
|
||||
| --------- | ------ | ------------------ |
|
||||
| `code` | int | 状态码,0 表示成功 |
|
||||
| `data` | object | 响应数据 |
|
||||
| `message` | string | 响应消息 |
|
||||
|
||||
## 快速开始
|
||||
|
||||
以下是一个完整的工作流示例:查询设备 → 获取动作 → 提交任务 → 获取结果。
|
||||
|
||||
### 步骤 1: 获取在线设备
|
||||
|
||||
```bash
|
||||
curl -X GET "http://localhost:8002/api/v1/online-devices"
|
||||
```
|
||||
|
||||
**响应示例**:
|
||||
|
||||
```json
|
||||
{
|
||||
"code": 0,
|
||||
"data": {
|
||||
"online_devices": {
|
||||
"host_node": {
|
||||
"device_key": "/host_node",
|
||||
"namespace": "",
|
||||
"machine_name": "本地",
|
||||
"uuid": "xxx-xxx-xxx",
|
||||
"node_name": "host_node"
|
||||
}
|
||||
},
|
||||
"total_count": 1,
|
||||
"timestamp": 1732612345.123
|
||||
},
|
||||
"message": "success"
|
||||
}
|
||||
```
|
||||
|
||||
### 步骤 2: 获取设备可用动作
|
||||
|
||||
```bash
|
||||
curl -X GET "http://localhost:8002/api/v1/devices/host_node/actions"
|
||||
```
|
||||
|
||||
**响应示例**:
|
||||
|
||||
```json
|
||||
{
|
||||
"code": 0,
|
||||
"data": {
|
||||
"device_id": "host_node",
|
||||
"actions": {
|
||||
"test_latency": {
|
||||
"type_name": "unilabos_msgs.action._empty_in.EmptyIn",
|
||||
"type_name_convert": "unilabos_msgs/action/_empty_in/EmptyIn",
|
||||
"action_path": "/devices/host_node/test_latency",
|
||||
"goal_info": "{}",
|
||||
"is_busy": false,
|
||||
"current_job_id": null
|
||||
},
|
||||
"create_resource": {
|
||||
"type_name": "unilabos_msgs.action._resource_create_from_outer_easy.ResourceCreateFromOuterEasy",
|
||||
"action_path": "/devices/host_node/create_resource",
|
||||
"goal_info": "{res_id: '', device_id: '', class_name: '', ...}",
|
||||
"is_busy": false,
|
||||
"current_job_id": null
|
||||
}
|
||||
},
|
||||
"action_count": 5
|
||||
},
|
||||
"message": "success"
|
||||
}
|
||||
```
|
||||
|
||||
**动作状态字段说明**:
|
||||
|
||||
| 字段 | 说明 |
|
||||
| ---------------- | ----------------------------- |
|
||||
| `type_name` | 动作类型的完整名称 |
|
||||
| `action_path` | ROS2 动作路径 |
|
||||
| `goal_info` | 动作参数模板 |
|
||||
| `is_busy` | 动作是否正在执行 |
|
||||
| `current_job_id` | 当前执行的任务 ID(如果繁忙) |
|
||||
|
||||
### 步骤 3: 提交任务
|
||||
|
||||
```bash
|
||||
curl -X POST "http://localhost:8002/api/v1/job/add" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"device_id":"host_node","action":"test_latency","action_args":{}}'
|
||||
```
|
||||
|
||||
**请求体**:
|
||||
|
||||
```json
|
||||
{
|
||||
"device_id": "host_node",
|
||||
"action": "test_latency",
|
||||
"action_args": {}
|
||||
}
|
||||
```
|
||||
|
||||
**请求参数说明**:
|
||||
|
||||
| 字段 | 类型 | 必填 | 说明 |
|
||||
| ------------- | ------ | ---- | ---------------------------------- |
|
||||
| `device_id` | string | ✓ | 目标设备 ID |
|
||||
| `action` | string | ✓ | 动作名称 |
|
||||
| `action_args` | object | ✓ | 动作参数(根据动作类型不同而变化) |
|
||||
|
||||
**响应示例**:
|
||||
|
||||
```json
|
||||
{
|
||||
"code": 0,
|
||||
"data": {
|
||||
"jobId": "b6acb586-733a-42ab-9f73-55c9a52aa8bd",
|
||||
"status": 1,
|
||||
"result": {}
|
||||
},
|
||||
"message": "success"
|
||||
}
|
||||
```
|
||||
|
||||
**任务状态码**:
|
||||
|
||||
| 状态码 | 含义 | 说明 |
|
||||
| ------ | --------- | ------------------------------ |
|
||||
| 0 | UNKNOWN | 未知状态 |
|
||||
| 1 | ACCEPTED | 任务已接受,等待执行 |
|
||||
| 2 | EXECUTING | 任务执行中 |
|
||||
| 3 | CANCELING | 任务取消中 |
|
||||
| 4 | SUCCEEDED | 任务成功完成 |
|
||||
| 5 | CANCELED | 任务已取消 |
|
||||
| 6 | ABORTED | 任务中止(设备繁忙或执行失败) |
|
||||
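
For client code, the status codes above map naturally onto a small enum; a sketch:

```python
from enum import IntEnum

class JobStatus(IntEnum):
    UNKNOWN = 0
    ACCEPTED = 1
    EXECUTING = 2
    CANCELING = 3
    SUCCEEDED = 4
    CANCELED = 5
    ABORTED = 6

TERMINAL_STATES = {JobStatus.SUCCEEDED, JobStatus.CANCELED, JobStatus.ABORTED}
```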
|
||||
### 步骤 4: 查询任务状态和结果
|
||||
|
||||
```bash
|
||||
curl -X GET "http://localhost:8002/api/v1/job/b6acb586-733a-42ab-9f73-55c9a52aa8bd/status"
|
||||
```
|
||||
|
||||
**响应示例(执行中)**:
|
||||
|
||||
```json
|
||||
{
|
||||
"code": 0,
|
||||
"data": {
|
||||
"jobId": "b6acb586-733a-42ab-9f73-55c9a52aa8bd",
|
||||
"status": 2,
|
||||
"result": {}
|
||||
},
|
||||
"message": "success"
|
||||
}
|
||||
```
|
||||
|
||||
**响应示例(执行完成)**:
|
||||
|
||||
```json
|
||||
{
|
||||
"code": 0,
|
||||
"data": {
|
||||
"jobId": "b6acb586-733a-42ab-9f73-55c9a52aa8bd",
|
||||
"status": 4,
|
||||
"result": {
|
||||
"error": "",
|
||||
"suc": true,
|
||||
"return_value": {
|
||||
"avg_rtt_ms": 103.99,
|
||||
"avg_time_diff_ms": 7181.55,
|
||||
"max_time_error_ms": 7210.57,
|
||||
"task_delay_ms": -1,
|
||||
"raw_delay_ms": 33.19,
|
||||
"test_count": 5,
|
||||
"status": "success"
|
||||
}
|
||||
}
|
||||
},
|
||||
"message": "success"
|
||||
}
|
||||
```
|
||||
|
||||
> **注意**: 任务结果在首次查询后会被自动删除,请确保保存返回的结果数据。
|
||||
|
||||
## API 端点列表
|
||||
|
||||
### 设备相关
|
||||
|
||||
| 端点 | 方法 | 说明 |
|
||||
| ---------------------------------------------------------- | ---- | ---------------------- |
|
||||
| `/api/v1/online-devices` | GET | 获取在线设备列表 |
|
||||
| `/api/v1/devices` | GET | 获取设备配置 |
|
||||
| `/api/v1/devices/{device_id}/actions` | GET | 获取指定设备的可用动作 |
|
||||
| `/api/v1/devices/{device_id}/actions/{action_name}/schema` | GET | 获取动作参数 Schema |
|
||||
| `/api/v1/actions` | GET | 获取所有设备的可用动作 |
|
||||
|
||||
### 任务相关
|
||||
|
||||
| 端点 | 方法 | 说明 |
|
||||
| ----------------------------- | ---- | ------------------ |
|
||||
| `/api/v1/job/add` | POST | 提交新任务 |
|
||||
| `/api/v1/job/{job_id}/status` | GET | 查询任务状态和结果 |
|
||||
|
||||
### 资源相关
|
||||
|
||||
| 端点 | 方法 | 说明 |
|
||||
| ------------------- | ---- | ------------ |
|
||||
| `/api/v1/resources` | GET | 获取资源列表 |
|
||||
|
||||
## 常见动作示例
|
||||
|
||||
### test_latency - 延迟测试
|
||||
|
||||
测试系统延迟,无需参数。
|
||||
|
||||
```bash
|
||||
curl -X POST "http://localhost:8002/api/v1/job/add" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"device_id":"host_node","action":"test_latency","action_args":{}}'
|
||||
```
|
||||
|
||||
### create_resource - 创建资源
|
||||
|
||||
在设备上创建新资源。
|
||||
|
||||
```bash
|
||||
curl -X POST "http://localhost:8002/api/v1/job/add" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"device_id": "host_node",
|
||||
"action": "create_resource",
|
||||
"action_args": {
|
||||
"res_id": "my_plate",
|
||||
"device_id": "host_node",
|
||||
"class_name": "Plate",
|
||||
"parent": "deck",
|
||||
"bind_locations": {"x": 0, "y": 0, "z": 0}
|
||||
}
|
||||
}'
|
||||
```
|
||||
|
||||
## 错误处理
|
||||
|
||||
### 设备繁忙
|
||||
|
||||
当设备正在执行其他任务时,提交新任务会返回 `status: 6`(ABORTED):
|
||||
|
||||
```json
|
||||
{
|
||||
"code": 0,
|
||||
"data": {
|
||||
"jobId": "xxx",
|
||||
"status": 6,
|
||||
"result": {}
|
||||
},
|
||||
"message": "success"
|
||||
}
|
||||
```
|
||||
|
||||
此时应等待当前任务完成后重试,或使用 `/devices/{device_id}/actions` 检查动作的 `is_busy` 状态。
|
||||
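
A small sketch of checking `is_busy` before submitting, built from the endpoints and response fields shown in this guide:

```python
import requests

BASE = "http://localhost:8002/api/v1"

def submit_when_idle(device_id: str, action: str, action_args: dict) -> str:
    """Submit a job only if the target action is not currently busy; returns the job id."""
    actions = requests.get(f"{BASE}/devices/{device_id}/actions").json()["data"]["actions"]
    if actions[action]["is_busy"]:
        raise RuntimeError(f"{device_id}.{action} busy, job {actions[action]['current_job_id']}")
    resp = requests.post(f"{BASE}/job/add",
                         json={"device_id": device_id, "action": action, "action_args": action_args})
    return resp.json()["data"]["jobId"]
```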
|
||||
### 参数错误
|
||||
|
||||
```json
|
||||
{
|
||||
"code": 2002,
|
||||
"data": { ... },
|
||||
"message": "device_id is required"
|
||||
}
|
||||
```
|
||||
|
||||
## 轮询策略
|
||||
|
||||
推荐的任务状态轮询策略:
|
||||
|
||||
```python
|
||||
import requests
|
||||
import time
|
||||
|
||||
def wait_for_job(job_id, timeout=60, interval=0.5):
|
||||
"""等待任务完成并返回结果"""
|
||||
start_time = time.time()
|
||||
|
||||
while time.time() - start_time < timeout:
|
||||
response = requests.get(f"http://localhost:8002/api/v1/job/{job_id}/status")
|
||||
data = response.json()["data"]
|
||||
|
||||
status = data["status"]
|
||||
if status in (4, 5, 6): # SUCCEEDED, CANCELED, ABORTED
|
||||
return data
|
||||
|
||||
time.sleep(interval)
|
||||
|
||||
raise TimeoutError(f"Job {job_id} did not complete within {timeout} seconds")
|
||||
|
||||
# 使用示例
|
||||
response = requests.post(
|
||||
"http://localhost:8002/api/v1/job/add",
|
||||
json={"device_id": "host_node", "action": "test_latency", "action_args": {}}
|
||||
)
|
||||
job_id = response.json()["data"]["jobId"]
|
||||
result = wait_for_job(job_id)
|
||||
print(result)
|
||||
```
|
||||
|
||||
## 相关文档
|
||||
|
||||
- [设备注册指南](add_device.md)
|
||||
- [动作定义指南](add_action.md)
|
||||
- [网络架构概述](networking_overview.md)
|
||||
@@ -592,3 +592,4 @@ ros2 topic list
|
||||
- [ROS2 网络配置](https://docs.ros.org/en/humble/Tutorials/Advanced/Networking.html)
|
||||
- [DDS 配置](https://fast-dds.docs.eprosima.com/)
|
||||
- Uni-Lab 云平台文档
|
||||
|
||||
|
||||
@@ -7,17 +7,3 @@ Uni-Lab-OS 是一个开源的实验室自动化操作系统,提供统一的设
|
||||
|
||||
intro.md
|
||||
```
|
||||
|
||||
## 开发者指南
|
||||
|
||||
```{toctree}
|
||||
:maxdepth: 2
|
||||
|
||||
developer_guide/http_api.md
|
||||
developer_guide/networking_overview.md
|
||||
developer_guide/add_device.md
|
||||
developer_guide/add_action.md
|
||||
developer_guide/add_registry.md
|
||||
developer_guide/add_yaml.md
|
||||
developer_guide/action_includes.md
|
||||
```
|
||||
|
||||
BIN docs/logo.png (binary file not shown; before: 262 KiB, after: 326 KiB).
@@ -1,6 +1,6 @@
package:
  name: ros-humble-unilabos-msgs
  version: 0.10.12
  version: 0.10.11
source:
  path: ../../unilabos_msgs
  target_directory: src

@@ -1,6 +1,6 @@
package:
  name: unilabos
  version: "0.10.12"
  version: "0.10.11"

source:
  path: ../..

@@ -2,6 +2,7 @@ import json
|
||||
import logging
|
||||
import traceback
|
||||
import uuid
|
||||
import xml.etree.ElementTree as ET
|
||||
from typing import Any, Dict, List
|
||||
|
||||
import networkx as nx
|
||||
@@ -24,15 +25,7 @@ class SimpleGraph:
|
||||
|
||||
def add_edge(self, source, target, **attrs):
|
||||
"""添加边"""
|
||||
# edge = {"source": source, "target": target, **attrs}
|
||||
edge = {
|
||||
"source": source, "target": target,
|
||||
"source_node_uuid": source,
|
||||
"target_node_uuid": target,
|
||||
"source_handle_io": "source",
|
||||
"target_handle_io": "target",
|
||||
**attrs
|
||||
}
|
||||
edge = {"source": source, "target": target, **attrs}
|
||||
self.edges.append(edge)
|
||||
|
||||
def to_dict(self):
|
||||
@@ -49,7 +42,6 @@ class SimpleGraph:
|
||||
"multigraph": False,
|
||||
"graph": {},
|
||||
"nodes": nodes_list,
|
||||
"edges": self.edges,
|
||||
"links": self.edges,
|
||||
}
|
||||
|
||||
@@ -66,8 +58,495 @@ def extract_json_from_markdown(text: str) -> str:
|
||||
return text
|
||||
|
||||
|
||||
def convert_to_type(val: str) -> Any:
|
||||
"""将字符串值转换为适当的数据类型"""
|
||||
if val == "True":
|
||||
return True
|
||||
if val == "False":
|
||||
return False
|
||||
if val == "?":
|
||||
return None
|
||||
if val.endswith(" g"):
|
||||
return float(val.split(" ")[0])
|
||||
if val.endswith("mg"):
|
||||
return float(val.split("mg")[0])
|
||||
elif val.endswith("mmol"):
|
||||
return float(val.split("mmol")[0]) / 1000
|
||||
elif val.endswith("mol"):
|
||||
return float(val.split("mol")[0])
|
||||
elif val.endswith("ml"):
|
||||
return float(val.split("ml")[0])
|
||||
elif val.endswith("RPM"):
|
||||
return float(val.split("RPM")[0])
|
||||
elif val.endswith(" °C"):
|
||||
return float(val.split(" ")[0])
|
||||
elif val.endswith(" %"):
|
||||
return float(val.split(" ")[0])
|
||||
return val
|
||||
|
||||
|
||||
def refactor_data(data: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
|
||||
"""统一的数据重构函数,根据操作类型自动选择模板"""
|
||||
refactored_data = []
|
||||
|
||||
# 定义操作映射,包含生物实验和有机化学的所有操作
|
||||
OPERATION_MAPPING = {
|
||||
# 生物实验操作
|
||||
"transfer_liquid": "SynBioFactory-liquid_handler.prcxi-transfer_liquid",
|
||||
"transfer": "SynBioFactory-liquid_handler.biomek-transfer",
|
||||
"incubation": "SynBioFactory-liquid_handler.biomek-incubation",
|
||||
"move_labware": "SynBioFactory-liquid_handler.biomek-move_labware",
|
||||
"oscillation": "SynBioFactory-liquid_handler.biomek-oscillation",
|
||||
# 有机化学操作
|
||||
"HeatChillToTemp": "SynBioFactory-workstation-HeatChillProtocol",
|
||||
"StopHeatChill": "SynBioFactory-workstation-HeatChillStopProtocol",
|
||||
"StartHeatChill": "SynBioFactory-workstation-HeatChillStartProtocol",
|
||||
"HeatChill": "SynBioFactory-workstation-HeatChillProtocol",
|
||||
"Dissolve": "SynBioFactory-workstation-DissolveProtocol",
|
||||
"Transfer": "SynBioFactory-workstation-TransferProtocol",
|
||||
"Evaporate": "SynBioFactory-workstation-EvaporateProtocol",
|
||||
"Recrystallize": "SynBioFactory-workstation-RecrystallizeProtocol",
|
||||
"Filter": "SynBioFactory-workstation-FilterProtocol",
|
||||
"Dry": "SynBioFactory-workstation-DryProtocol",
|
||||
"Add": "SynBioFactory-workstation-AddProtocol",
|
||||
}
|
||||
|
||||
UNSUPPORTED_OPERATIONS = ["Purge", "Wait", "Stir", "ResetHandling"]
|
||||
|
||||
for step in data:
|
||||
operation = step.get("action")
|
||||
if not operation or operation in UNSUPPORTED_OPERATIONS:
|
||||
continue
|
||||
|
||||
# 处理重复操作
|
||||
if operation == "Repeat":
|
||||
times = step.get("times", step.get("parameters", {}).get("times", 1))
|
||||
sub_steps = step.get("steps", step.get("parameters", {}).get("steps", []))
|
||||
for i in range(int(times)):
|
||||
sub_data = refactor_data(sub_steps)
|
||||
refactored_data.extend(sub_data)
|
||||
continue
|
||||
|
||||
# 获取模板名称
|
||||
template = OPERATION_MAPPING.get(operation)
|
||||
if not template:
|
||||
# 自动推断模板类型
|
||||
if operation.lower() in ["transfer", "incubation", "move_labware", "oscillation"]:
|
||||
template = f"SynBioFactory-liquid_handler.biomek-{operation}"
|
||||
else:
|
||||
template = f"SynBioFactory-workstation-{operation}Protocol"
|
||||
|
||||
# 创建步骤数据
|
||||
step_data = {
|
||||
"template": template,
|
||||
"description": step.get("description", step.get("purpose", f"{operation} operation")),
|
||||
"lab_node_type": "Device",
|
||||
"parameters": step.get("parameters", step.get("action_args", {})),
|
||||
}
|
||||
refactored_data.append(step_data)
|
||||
|
||||
return refactored_data
|
||||
|
||||
|
||||
def build_protocol_graph(
|
||||
labware_info: Dict[str, Dict[str, Any]], protocol_steps: List[Dict[str, Any]], workstation_name: str
|
||||
) -> SimpleGraph:
|
||||
"""统一的协议图构建函数,根据设备类型自动选择构建逻辑"""
|
||||
G = SimpleGraph()
|
||||
resource_last_writer = {}
|
||||
LAB_NAME = "SynBioFactory"
|
||||
|
||||
protocol_steps = refactor_data(protocol_steps)
|
||||
|
||||
# 检查协议步骤中的模板来判断协议类型
|
||||
has_biomek_template = any(
|
||||
("biomek" in step.get("template", "")) or ("prcxi" in step.get("template", ""))
|
||||
for step in protocol_steps
|
||||
)
|
||||
|
||||
if has_biomek_template:
|
||||
# 生物实验协议图构建
|
||||
for labware_id, labware in labware_info.items():
|
||||
node_id = str(uuid.uuid4())
|
||||
|
||||
labware_attrs = labware.copy()
|
||||
labware_id = labware_attrs.pop("id", labware_attrs.get("name", f"labware_{uuid.uuid4()}"))
|
||||
labware_attrs["description"] = labware_id
|
||||
labware_attrs["lab_node_type"] = (
|
||||
"Reagent" if "Plate" in str(labware_id) else "Labware" if "Rack" in str(labware_id) else "Sample"
|
||||
)
|
||||
labware_attrs["device_id"] = workstation_name
|
||||
|
||||
G.add_node(node_id, template=f"{LAB_NAME}-host_node-create_resource", **labware_attrs)
|
||||
resource_last_writer[labware_id] = f"{node_id}:labware"
|
||||
|
||||
# 处理协议步骤
|
||||
prev_node = None
|
||||
for i, step in enumerate(protocol_steps):
|
||||
node_id = str(uuid.uuid4())
|
||||
G.add_node(node_id, **step)
|
||||
|
||||
# 添加控制流边
|
||||
if prev_node is not None:
|
||||
G.add_edge(prev_node, node_id, source_port="ready", target_port="ready")
|
||||
prev_node = node_id
|
||||
|
||||
# 处理物料流
|
||||
params = step.get("parameters", {})
|
||||
if "sources" in params and params["sources"] in resource_last_writer:
|
||||
source_node, source_port = resource_last_writer[params["sources"]].split(":")
|
||||
G.add_edge(source_node, node_id, source_port=source_port, target_port="labware")
|
||||
|
||||
if "targets" in params:
|
||||
resource_last_writer[params["targets"]] = f"{node_id}:labware"
|
||||
|
||||
# 添加协议结束节点
|
||||
end_id = str(uuid.uuid4())
|
||||
G.add_node(end_id, template=f"{LAB_NAME}-liquid_handler.biomek-run_protocol")
|
||||
if prev_node is not None:
|
||||
G.add_edge(prev_node, end_id, source_port="ready", target_port="ready")
|
||||
|
||||
else:
|
||||
# 有机化学协议图构建
|
||||
WORKSTATION_ID = workstation_name
|
||||
|
||||
# 为所有labware创建资源节点
|
||||
for item_id, item in labware_info.items():
|
||||
# item_id = item.get("id") or item.get("name", f"item_{uuid.uuid4()}")
|
||||
node_id = str(uuid.uuid4())
|
||||
|
||||
# 判断节点类型
|
||||
if item.get("type") == "hardware" or "reactor" in str(item_id).lower():
|
||||
if "reactor" not in str(item_id).lower():
|
||||
continue
|
||||
lab_node_type = "Sample"
|
||||
description = f"Prepare Reactor: {item_id}"
|
||||
liquid_type = []
|
||||
liquid_volume = []
|
||||
else:
|
||||
lab_node_type = "Reagent"
|
||||
description = f"Add Reagent to Flask: {item_id}"
|
||||
liquid_type = [item_id]
|
||||
liquid_volume = [1e5]
|
||||
|
||||
G.add_node(
|
||||
node_id,
|
||||
template=f"{LAB_NAME}-host_node-create_resource",
|
||||
description=description,
|
||||
lab_node_type=lab_node_type,
|
||||
res_id=item_id,
|
||||
device_id=WORKSTATION_ID,
|
||||
class_name="container",
|
||||
parent=WORKSTATION_ID,
|
||||
bind_locations={"x": 0.0, "y": 0.0, "z": 0.0},
|
||||
liquid_input_slot=[-1],
|
||||
liquid_type=liquid_type,
|
||||
liquid_volume=liquid_volume,
|
||||
slot_on_deck="",
|
||||
role=item.get("role", ""),
|
||||
)
|
||||
resource_last_writer[item_id] = f"{node_id}:labware"
|
||||
|
||||
last_control_node_id = None
|
||||
|
||||
# 处理协议步骤
|
||||
for step in protocol_steps:
|
||||
node_id = str(uuid.uuid4())
|
||||
G.add_node(node_id, **step)
|
||||
|
||||
# 控制流
|
||||
if last_control_node_id is not None:
|
||||
G.add_edge(last_control_node_id, node_id, source_port="ready", target_port="ready")
|
||||
last_control_node_id = node_id
|
||||
|
||||
# 物料流
|
||||
params = step.get("parameters", {})
|
||||
input_resources = {
|
||||
"Vessel": params.get("vessel"),
|
||||
"ToVessel": params.get("to_vessel"),
|
||||
"FromVessel": params.get("from_vessel"),
|
||||
"reagent": params.get("reagent"),
|
||||
"solvent": params.get("solvent"),
|
||||
"compound": params.get("compound"),
|
||||
"sources": params.get("sources"),
|
||||
"targets": params.get("targets"),
|
||||
}
|
||||
|
||||
for target_port, resource_name in input_resources.items():
|
||||
if resource_name and resource_name in resource_last_writer:
|
||||
source_node, source_port = resource_last_writer[resource_name].split(":")
|
||||
G.add_edge(source_node, node_id, source_port=source_port, target_port=target_port)
|
||||
|
||||
output_resources = {
|
||||
"VesselOut": params.get("vessel"),
|
||||
"FromVesselOut": params.get("from_vessel"),
|
||||
"ToVesselOut": params.get("to_vessel"),
|
||||
"FiltrateOut": params.get("filtrate_vessel"),
|
||||
"reagent": params.get("reagent"),
|
||||
"solvent": params.get("solvent"),
|
||||
"compound": params.get("compound"),
|
||||
"sources_out": params.get("sources"),
|
||||
"targets_out": params.get("targets"),
|
||||
}
|
||||
|
||||
for source_port, resource_name in output_resources.items():
|
||||
if resource_name:
|
||||
resource_last_writer[resource_name] = f"{node_id}:{source_port}"
|
||||
|
||||
return G
|
||||
|
||||
|
||||
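Illustrative usage sketch with assumed inputs (not taken from the repository): vessels and reagents named in the step parameters become material-flow edges, while consecutive steps are chained through their ready ports:

# Hypothetical labware/steps; keys follow the accessors used in build_protocol_graph above.
labware = {
    "reactor_1": {"type": "hardware", "role": "reactor"},
    "ethanol": {"type": "reagent", "role": "solvent"},
}
steps = [
    {"action": "Add", "parameters": {"vessel": "reactor_1", "reagent": "ethanol"}},
    {"action": "Evaporate", "parameters": {"vessel": "reactor_1"}},
]
graph = build_protocol_graph(labware, steps, workstation_name="workstation")
# graph.nodes: two create_resource nodes plus one node per step;
# graph.edges: ready->ready control flow plus Vessel/reagent material-flow edges
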
def draw_protocol_graph(protocol_graph: SimpleGraph, output_path: str):
    """
    (Helper) Draw the protocol workflow graph with networkx and matplotlib, for visualization.
    """
    if not protocol_graph:
        print("Cannot draw graph: Graph object is empty.")
        return

    G = nx.DiGraph()

    for node_id, attrs in protocol_graph.nodes.items():
        label = attrs.get("description", attrs.get("template", node_id[:8]))
        G.add_node(node_id, label=label, **attrs)

    for edge in protocol_graph.edges:
        G.add_edge(edge["source"], edge["target"])

    plt.figure(figsize=(20, 15))
    try:
        pos = nx.nx_agraph.graphviz_layout(G, prog="dot")
    except Exception:
        pos = nx.shell_layout(G)  # Fallback layout

    node_labels = {node: data["label"] for node, data in G.nodes(data=True)}
    nx.draw(
        G,
        pos,
        with_labels=False,
        node_size=2500,
        node_color="skyblue",
        node_shape="o",
        edge_color="gray",
        width=1.5,
        arrowsize=15,
    )
    nx.draw_networkx_labels(G, pos, labels=node_labels, font_size=8, font_weight="bold")

    plt.title("Chemical Protocol Workflow Graph", size=15)
    plt.savefig(output_path, dpi=300, bbox_inches="tight")
    plt.close()
    print(f" - Visualization saved to '{output_path}'")


from networkx.drawing.nx_agraph import to_agraph
import re

COMPASS = {"n", "e", "s", "w", "ne", "nw", "se", "sw", "c"}

def _is_compass(port: str) -> bool:
    return isinstance(port, str) and port.lower() in COMPASS

def draw_protocol_graph_with_ports(protocol_graph, output_path: str, rankdir: str = "LR"):
    """
    Draw the protocol workflow graph using Graphviz port syntax.
    - If an edge's source_port/target_port is a compass point (n/e/s/w/...), the compass is used directly.
    - Otherwise the node is turned into a record shape with named ports <portname>.
    PyGraphviz renders the result to output_path (the suffix selects the format, e.g. .png/.svg/.pdf).
    """
    if not protocol_graph:
        print("Cannot draw graph: Graph object is empty.")
        return

    # 1) Build the directed graph with networkx first, keeping the port attributes
    G = nx.DiGraph()
    for node_id, attrs in protocol_graph.nodes.items():
        label = attrs.get("description", attrs.get("template", node_id[:8]))
        # Keep a clean "core label" for the middle slot of the record
        G.add_node(node_id, _core_label=str(label), **{k: v for k, v in attrs.items() if k not in ("label",)})

    edges_data = []
    in_ports_by_node = {}  # collect named input ports
    out_ports_by_node = {}  # collect named output ports

    for edge in protocol_graph.edges:
        u = edge["source"]
        v = edge["target"]
        sp = edge.get("source_port")
        tp = edge.get("target_port")

        # Record the edge in the graph (keeping the original port info)
        G.add_edge(u, v, source_port=sp, target_port=tp)
        edges_data.append((u, v, sp, tp))

        # Non-compass ports are grouped as named ports; records are built for those nodes later
        if sp and not _is_compass(sp):
            out_ports_by_node.setdefault(u, set()).add(str(sp))
        if tp and not _is_compass(tp):
            in_ports_by_node.setdefault(v, set()).add(str(tp))

    # 2) Convert to an AGraph and render with Graphviz
    A = to_agraph(G)
    A.graph_attr.update(rankdir=rankdir, splines="true", concentrate="false", fontsize="10")
    A.node_attr.update(shape="box", style="rounded,filled", fillcolor="lightyellow", color="#999999", fontname="Helvetica")
    A.edge_attr.update(arrowsize="0.8", color="#666666")

    # 3) Give nodes that need named ports a record shape and label
    #    left column = input ports; middle = core label; right column = output ports
    for n in A.nodes():
        node = A.get_node(n)
        core = G.nodes[n].get("_core_label", n)

        in_ports = sorted(in_ports_by_node.get(n, []))
        out_ports = sorted(out_ports_by_node.get(n, []))

        # Use a record if this node has named ports; otherwise keep the plain box
        if in_ports or out_ports:
            def port_fields(ports):
                if not ports:
                    return " "  # an empty slot is required as a placeholder
                # one small cell per port: <p> name
                return "|".join(f"<{re.sub(r'[^A-Za-z0-9_:.|-]', '_', p)}> {p}" for p in ports)

            left = port_fields(in_ports)
            right = port_fields(out_ports)

            # three columns: left (in) | middle (node name) | right (out)
            record_label = f"{{ {left} | {core} | {right} }}"
            node.attr.update(shape="record", label=record_label)
        else:
            # no named ports: plain box showing the core label
            node.attr.update(label=str(core))

    # 4) Set headport / tailport on the edges
    #    - compass port: use the compass directly (e.g., headport="e")
    #    - named port: use the <port> name defined in the record label (same name)
    for (u, v, sp, tp) in edges_data:
        e = A.get_edge(u, v)

        # Graphviz attributes: tail is the source, head is the target
        if sp:
            if _is_compass(sp):
                e.attr["tailport"] = sp.lower()
            else:
                # must match the <port> name in the record label; special characters were already sanitized there
                e.attr["tailport"] = re.sub(r'[^A-Za-z0-9_:.|-]', '_', str(sp))

        if tp:
            if _is_compass(tp):
                e.attr["headport"] = tp.lower()
            else:
                e.attr["headport"] = re.sub(r'[^A-Za-z0-9_:.|-]', '_', str(tp))

        # Optional: constraint/spline settings can make edges hug the node borders more closely
        # e.attr["arrowhead"] = "vee"

    # 5) Output
    A.draw(output_path, prog="dot")
    print(f" - Port-aware workflow rendered to '{output_path}'")

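A minimal rendering sketch, assuming graphviz and pygraphviz are installed; the file suffix selects the output format:

# Sketch only; "graph" is the SimpleGraph produced by build_protocol_graph above.
draw_protocol_graph(graph, "protocol_overview.png")          # matplotlib view, control flow only
draw_protocol_graph_with_ports(graph, "protocol_ports.svg")  # record nodes with named ports, laid out left to right
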
def flatten_xdl_procedure(procedure_elem: ET.Element) -> List[ET.Element]:
    """Flatten a nested XDL Procedure structure."""
    flattened_operations = []
    TEMP_UNSUPPORTED_PROTOCOL = ["Purge", "Wait", "Stir", "ResetHandling"]

    def extract_operations(element: ET.Element):
        if element.tag not in ["Prep", "Reaction", "Workup", "Purification", "Procedure"]:
            if element.tag not in TEMP_UNSUPPORTED_PROTOCOL:
                flattened_operations.append(element)

        for child in element:
            extract_operations(child)

    for child in procedure_elem:
        extract_operations(child)

    return flattened_operations


def parse_xdl_content(xdl_content: str) -> tuple:
    """Parse XDL content."""
    try:
        xdl_content_cleaned = "".join(c for c in xdl_content if c.isprintable())
        root = ET.fromstring(xdl_content_cleaned)

        synthesis_elem = root.find("Synthesis")
        if synthesis_elem is None:
            return None, None, None

        # Parse hardware components
        hardware_elem = synthesis_elem.find("Hardware")
        hardware = []
        if hardware_elem is not None:
            hardware = [{"id": c.get("id"), "type": c.get("type")} for c in hardware_elem.findall("Component")]

        # Parse reagents
        reagents_elem = synthesis_elem.find("Reagents")
        reagents = []
        if reagents_elem is not None:
            reagents = [{"name": r.get("name"), "role": r.get("role", "")} for r in reagents_elem.findall("Reagent")]

        # Parse the procedure
        procedure_elem = synthesis_elem.find("Procedure")
        if procedure_elem is None:
            return None, None, None

        flattened_operations = flatten_xdl_procedure(procedure_elem)
        return hardware, reagents, flattened_operations

    except ET.ParseError as e:
        raise ValueError(f"Invalid XDL format: {e}")


def convert_xdl_to_dict(xdl_content: str) -> Dict[str, Any]:
    """
    Convert XDL XML into the standard dict format.

    Args:
        xdl_content: XDL XML content

    Returns:
        Conversion result containing the step and labware information
    """
    try:
        hardware, reagents, flattened_operations = parse_xdl_content(xdl_content)
        if hardware is None:
            return {"error": "Failed to parse XDL content", "success": False}

        # Convert the XDL elements into dict format
        steps_data = []
        for elem in flattened_operations:
            # Convert parameter types
            parameters = {}
            for key, val in elem.attrib.items():
                converted_val = convert_to_type(val)
                if converted_val is not None:
                    parameters[key] = converted_val

            step_dict = {
                "operation": elem.tag,
                "parameters": parameters,
                "description": elem.get("purpose", f"Operation: {elem.tag}"),
            }
            steps_data.append(step_dict)

        # Merge hardware and reagents into the unified labware_info format
        labware_data = []
        labware_data.extend({"id": hw["id"], "type": "hardware", **hw} for hw in hardware)
        labware_data.extend({"name": reagent["name"], "type": "reagent", **reagent} for reagent in reagents)

        return {
            "success": True,
            "steps": steps_data,
            "labware": labware_data,
            "message": f"Successfully converted XDL to dict format. Found {len(steps_data)} steps and {len(labware_data)} labware items.",
        }

    except Exception as e:
        error_msg = f"XDL conversion failed: {str(e)}"
        logger.error(error_msg)
        return {"error": error_msg, "success": False}

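For illustration only, a hedged example of the converter's input and output shape; the XDL snippet below is invented and simply mirrors the tags the parser above looks for (Synthesis, Hardware/Component, Reagents/Reagent, Procedure):

# Hypothetical minimal XDL document, not taken from the repository.
xdl = """
<XDL>
  <Synthesis>
    <Hardware>
      <Component id="reactor_1" type="reactor"/>
    </Hardware>
    <Reagents>
      <Reagent name="ethanol" role="solvent"/>
    </Reagents>
    <Procedure>
      <Add vessel="reactor_1" reagent="ethanol" volume="10 mL" purpose="Charge solvent"/>
      <HeatChill vessel="reactor_1" temp="60 C" time="30 min"/>
    </Procedure>
  </Synthesis>
</XDL>
"""
result = convert_xdl_to_dict(xdl)
# result["steps"]   -> [{"operation": "Add", "parameters": {...}, "description": "Charge solvent"}, ...]
# result["labware"] -> [{"id": "reactor_1", "type": "hardware", ...}, {"name": "ethanol", "type": "reagent", ...}]
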
def create_workflow(

setup.py (2 lines changed)
@@ -4,7 +4,7 @@ package_name = 'unilabos'

setup(
    name=package_name,
    version='0.10.12',
    version='0.10.11',
    packages=find_packages(),
    include_package_data=True,
    install_requires=['setuptools'],

@@ -1 +1 @@
__version__ = "0.10.12"
__version__ = "0.10.11"

@@ -141,7 +141,7 @@ class CommunicationClientFactory:
|
||||
"""
|
||||
if cls._client_cache is None:
|
||||
cls._client_cache = cls.create_client(protocol)
|
||||
logger.trace(f"[CommunicationFactory] Created {type(cls._client_cache).__name__} client")
|
||||
logger.info(f"[CommunicationFactory] Created {type(cls._client_cache).__name__} client")
|
||||
|
||||
return cls._client_cache
|
||||
|
||||
|
||||
@@ -49,8 +49,6 @@ def convert_argv_dashes_to_underscores(args: argparse.ArgumentParser):
|
||||
def parse_args():
|
||||
"""解析命令行参数"""
|
||||
parser = argparse.ArgumentParser(description="Start Uni-Lab Edge server.")
|
||||
subparsers = parser.add_subparsers(title="Valid subcommands", dest="command")
|
||||
|
||||
parser.add_argument("-g", "--graph", help="Physical setup graph file path.")
|
||||
parser.add_argument("-c", "--controllers", default=None, help="Controllers config file path.")
|
||||
parser.add_argument(
|
||||
@@ -155,14 +153,6 @@ def parse_args():
|
||||
default=False,
|
||||
help="Complete registry information",
|
||||
)
|
||||
|
||||
# label
|
||||
workflow_parser = subparsers.add_parser(
|
||||
"workflow_upload",
|
||||
help="Upload workflow from xdl/json/python files",
|
||||
)
|
||||
workflow_parser.add_argument("-t", "--labeltype", default="singlepoint", type=str,
|
||||
help="QM calculation type, support 'singlepoint', 'optimize' and 'dimer' currently")
|
||||
return parser
|
||||
|
||||
|
||||
@@ -173,9 +163,6 @@ def main():
|
||||
convert_argv_dashes_to_underscores(args)
|
||||
args_dict = vars(args.parse_args())
|
||||
|
||||
# 显示启动横幅
|
||||
print_unilab_banner(args_dict)
|
||||
|
||||
# 环境检查 - 检查并自动安装必需的包 (可选)
|
||||
if not args_dict.get("skip_env_check", False):
|
||||
from unilabos.utils.environment_check import check_environment
|
||||
@@ -231,7 +218,7 @@ def main():
|
||||
|
||||
if hasattr(BasicConfig, "log_level"):
|
||||
logger.info(f"Log level set to '{BasicConfig.log_level}' from config file.")
|
||||
configure_logger(loglevel=BasicConfig.log_level, working_dir=working_dir)
|
||||
configure_logger(loglevel=BasicConfig.log_level)
|
||||
|
||||
if args_dict["addr"] == "test":
|
||||
print_status("使用测试环境地址", "info")
|
||||
@@ -252,18 +239,7 @@ def main():
|
||||
if args_dict.get("sk", ""):
|
||||
BasicConfig.sk = args_dict.get("sk", "")
|
||||
print_status("传入了sk参数,优先采用传入参数!", "info")
|
||||
BasicConfig.working_dir = working_dir
|
||||
|
||||
# 显示启动横幅
|
||||
print_unilab_banner(args_dict)
|
||||
|
||||
#####################################
|
||||
######## 启动设备接入端(主入口) ########
|
||||
#####################################
|
||||
launch(args_dict)
|
||||
|
||||
|
||||
def launch(args_dict: Dict[str, Any]):
|
||||
# 使用远程资源启动
|
||||
if args_dict["use_remote_resource"]:
|
||||
print_status("使用远程资源启动", "info")
|
||||
@@ -278,6 +254,7 @@ def launch(args_dict: Dict[str, Any]):
|
||||
|
||||
BasicConfig.port = args_dict["port"] if args_dict["port"] else BasicConfig.port
|
||||
BasicConfig.disable_browser = args_dict["disable_browser"] or BasicConfig.disable_browser
|
||||
BasicConfig.working_dir = working_dir
|
||||
BasicConfig.is_host_mode = not args_dict.get("is_slave", False)
|
||||
BasicConfig.slave_no_host = args_dict.get("slave_no_host", False)
|
||||
BasicConfig.upload_registry = args_dict.get("upload_registry", False)
|
||||
@@ -301,6 +278,9 @@ def launch(args_dict: Dict[str, Any]):
|
||||
from unilabos.resources.graphio import modify_to_backend_format
|
||||
from unilabos.ros.nodes.resource_tracker import ResourceTreeSet, ResourceDict
|
||||
|
||||
# 显示启动横幅
|
||||
print_unilab_banner(args_dict)
|
||||
|
||||
# 注册表
|
||||
lab_registry = build_registry(
|
||||
args_dict["registry_path"], args_dict.get("complete_registry", False), args_dict["upload_registry"]
|
||||
@@ -470,13 +450,13 @@ def launch(args_dict: Dict[str, Any]):
|
||||
start_backend(**args_dict)
|
||||
start_server(
|
||||
open_browser=not args_dict["disable_browser"],
|
||||
port=BasicConfig.port,
|
||||
port=args_dict["port"],
|
||||
)
|
||||
else:
|
||||
start_backend(**args_dict)
|
||||
start_server(
|
||||
open_browser=not args_dict["disable_browser"],
|
||||
port=BasicConfig.port,
|
||||
port=args_dict["port"],
|
||||
)
|
||||
|
||||
|
||||
|
||||
@@ -51,25 +51,21 @@ class Resp(BaseModel):
|
||||
class JobAddReq(BaseModel):
|
||||
device_id: str = Field(examples=["Gripper"], description="device id")
|
||||
action: str = Field(examples=["_execute_driver_command_async"], description="action name", default="")
|
||||
action_type: str = Field(
|
||||
examples=["unilabos_msgs.action._str_single_input.StrSingleInput"], description="action type", default=""
|
||||
)
|
||||
action_args: dict = Field(examples=[{"string": "string"}], description="action arguments", default_factory=dict)
|
||||
task_id: str = Field(examples=["task_id"], description="task uuid (auto-generated if empty)", default="")
|
||||
job_id: str = Field(examples=["job_id"], description="goal uuid (auto-generated if empty)", default="")
|
||||
node_id: str = Field(examples=["node_id"], description="node uuid", default="")
|
||||
server_info: dict = Field(
|
||||
examples=[{"send_timestamp": 1717000000.0}],
|
||||
description="server info (auto-generated if empty)",
|
||||
default_factory=dict,
|
||||
)
|
||||
action_type: str = Field(examples=["unilabos_msgs.action._str_single_input.StrSingleInput"], description="action name", default="")
|
||||
action_args: dict = Field(examples=[{'string': 'string'}], description="action name", default="")
|
||||
task_id: str = Field(examples=["task_id"], description="task uuid")
|
||||
job_id: str = Field(examples=["job_id"], description="goal uuid")
|
||||
node_id: str = Field(examples=["node_id"], description="node uuid")
|
||||
server_info: dict = Field(examples=[{"send_timestamp": 1717000000.0}], description="server info")
|
||||
|
||||
data: dict = Field(examples=[{"position": 30, "torque": 5, "action": "push_to"}], default_factory=dict)
|
||||
data: dict = Field(examples=[{"position": 30, "torque": 5, "action": "push_to"}], default={})
|
||||
|
||||
|
||||
class JobStepFinishReq(BaseModel):
|
||||
token: str = Field(examples=["030944"], description="token")
|
||||
request_time: str = Field(examples=["2024-12-12 12:12:12.xxx"], description="requestTime")
|
||||
request_time: str = Field(
|
||||
examples=["2024-12-12 12:12:12.xxx"], description="requestTime"
|
||||
)
|
||||
data: dict = Field(
|
||||
examples=[
|
||||
{
|
||||
@@ -87,7 +83,9 @@ class JobStepFinishReq(BaseModel):
|
||||
|
||||
class JobPreintakeFinishReq(BaseModel):
|
||||
token: str = Field(examples=["030944"], description="token")
|
||||
request_time: str = Field(examples=["2024-12-12 12:12:12.xxx"], description="requestTime")
|
||||
request_time: str = Field(
|
||||
examples=["2024-12-12 12:12:12.xxx"], description="requestTime"
|
||||
)
|
||||
data: dict = Field(
|
||||
examples=[
|
||||
{
|
||||
@@ -104,7 +102,9 @@ class JobPreintakeFinishReq(BaseModel):
|
||||
|
||||
class JobFinishReq(BaseModel):
|
||||
token: str = Field(examples=["030944"], description="token")
|
||||
request_time: str = Field(examples=["2024-12-12 12:12:12.xxx"], description="requestTime")
|
||||
request_time: str = Field(
|
||||
examples=["2024-12-12 12:12:12.xxx"], description="requestTime"
|
||||
)
|
||||
data: dict = Field(
|
||||
examples=[
|
||||
{
|
||||
@@ -133,10 +133,6 @@ class JobData(BaseModel):
|
||||
default=0,
|
||||
description="0:UNKNOWN, 1:ACCEPTED, 2:EXECUTING, 3:CANCELING, 4:SUCCEEDED, 5:CANCELED, 6:ABORTED",
|
||||
)
|
||||
result: dict = Field(
|
||||
default_factory=dict,
|
||||
description="Job result data (available when status is SUCCEEDED/CANCELED/ABORTED)",
|
||||
)
|
||||
|
||||
|
||||
class JobStatusResp(Resp):
|
||||
|
||||
@@ -1,158 +1,161 @@
|
||||
import argparse
|
||||
import os
|
||||
import time
|
||||
from datetime import datetime
|
||||
from pathlib import Path
|
||||
from typing import Dict, Optional, Tuple, Union
|
||||
from typing import Dict, Optional, Tuple
|
||||
|
||||
import requests
|
||||
|
||||
from unilabos.app.web.client import http_client, HTTPClient
|
||||
from unilabos.utils import logger
|
||||
from unilabos.config.config import OSSUploadConfig
|
||||
|
||||
|
||||
def _get_oss_token(
|
||||
filename: str,
|
||||
driver_name: str = "default",
|
||||
exp_type: str = "default",
|
||||
client: Optional[HTTPClient] = None,
|
||||
) -> Tuple[bool, Dict]:
|
||||
def _init_upload(file_path: str, oss_path: str, filename: Optional[str] = None,
|
||||
process_key: str = "file-upload", device_id: str = "default",
|
||||
expires_hours: int = 1) -> Tuple[bool, Dict]:
|
||||
"""
|
||||
获取OSS上传Token
|
||||
初始化上传过程
|
||||
|
||||
Args:
|
||||
filename: 文件名
|
||||
driver_name: 驱动名称
|
||||
exp_type: 实验类型
|
||||
client: HTTPClient实例,如果不提供则使用默认的http_client
|
||||
file_path: 本地文件路径
|
||||
oss_path: OSS目标路径
|
||||
filename: 文件名,如果为None则使用file_path的文件名
|
||||
process_key: 处理键
|
||||
device_id: 设备ID
|
||||
expires_hours: 链接过期小时数
|
||||
|
||||
Returns:
|
||||
(成功标志, Token数据字典包含token/path/host/expires)
|
||||
(成功标志, 响应数据)
|
||||
"""
|
||||
# 使用提供的client或默认的http_client
|
||||
if client is None:
|
||||
client = http_client
|
||||
if filename is None:
|
||||
filename = os.path.basename(file_path)
|
||||
|
||||
# 构造scene参数: driver_name-exp_type
|
||||
sub_path = f"{driver_name}-{exp_type}"
|
||||
# 构造初始化请求
|
||||
url = f"{OSSUploadConfig.api_host}{OSSUploadConfig.init_endpoint}"
|
||||
headers = {
|
||||
"Authorization": OSSUploadConfig.authorization,
|
||||
"Content-Type": "application/json"
|
||||
}
|
||||
|
||||
# 构造请求URL,使用client的remote_addr(已包含/api/v1/)
|
||||
url = f"{client.remote_addr}/applications/token"
|
||||
params = {"sub_path": sub_path, "filename": filename, "scene": "job"}
|
||||
payload = {
|
||||
"device_id": device_id,
|
||||
"process_key": process_key,
|
||||
"filename": filename,
|
||||
"path": oss_path,
|
||||
"expires_hours": expires_hours
|
||||
}
|
||||
|
||||
try:
|
||||
logger.info(f"[OSS] 请求预签名URL: sub_path={sub_path}, filename={filename}")
|
||||
response = requests.get(url, params=params, headers={"Authorization": f"Lab {client.auth}"}, timeout=10)
|
||||
|
||||
if response.status_code == 200:
|
||||
response = requests.post(url, headers=headers, json=payload)
|
||||
if response.status_code == 201:
|
||||
result = response.json()
|
||||
if result.get("code") == 0:
|
||||
data = result.get("data", {})
|
||||
if result.get("code") == "10000":
|
||||
return True, result.get("data", {})
|
||||
|
||||
# 转换expires时间戳为可读格式
|
||||
expires_timestamp = data.get("expires", 0)
|
||||
expires_datetime = datetime.fromtimestamp(expires_timestamp)
|
||||
expires_str = expires_datetime.strftime("%Y-%m-%d %H:%M:%S")
|
||||
|
||||
logger.info(f"[OSS] 获取预签名URL成功")
|
||||
logger.info(f"[OSS] - URL: {data.get('url', 'N/A')}")
|
||||
logger.info(f"[OSS] - Expires: {expires_str} (timestamp: {expires_timestamp})")
|
||||
|
||||
return True, data
|
||||
|
||||
logger.error(f"[OSS] 获取预签名URL失败: {response.status_code}, {response.text}")
|
||||
print(f"初始化上传失败: {response.status_code}, {response.text}")
|
||||
return False, {}
|
||||
except Exception as e:
|
||||
logger.error(f"[OSS] 获取预签名URL异常: {str(e)}")
|
||||
print(f"初始化上传异常: {str(e)}")
|
||||
return False, {}
|
||||
|
||||
|
||||
def _put_upload(file_path: str, upload_url: str) -> bool:
|
||||
"""
|
||||
使用预签名URL上传文件到OSS
|
||||
执行PUT上传
|
||||
|
||||
Args:
|
||||
file_path: 本地文件路径
|
||||
upload_url: 完整的预签名上传URL
|
||||
upload_url: 上传URL
|
||||
|
||||
Returns:
|
||||
是否成功
|
||||
"""
|
||||
try:
|
||||
logger.info(f"[OSS] 开始上传文件: {file_path}")
|
||||
|
||||
with open(file_path, "rb") as f:
|
||||
# 使用预签名URL上传,不需要额外的认证header
|
||||
response = requests.put(upload_url, data=f, timeout=300)
|
||||
|
||||
response = requests.put(upload_url, data=f)
|
||||
if response.status_code == 200:
|
||||
logger.info(f"[OSS] 文件上传成功")
|
||||
return True
|
||||
|
||||
logger.error(f"[OSS] 上传失败: {response.status_code}")
|
||||
logger.error(f"[OSS] 响应内容: {response.text[:500] if response.text else '无响应内容'}")
|
||||
print(f"PUT上传失败: {response.status_code}, {response.text}")
|
||||
return False
|
||||
except Exception as e:
|
||||
logger.error(f"[OSS] 上传异常: {str(e)}")
|
||||
print(f"PUT上传异常: {str(e)}")
|
||||
return False
|
||||
|
||||
|
||||
def oss_upload(
|
||||
file_path: Union[str, Path],
|
||||
filename: Optional[str] = None,
|
||||
driver_name: str = "default",
|
||||
exp_type: str = "default",
|
||||
max_retries: int = 3,
|
||||
client: Optional[HTTPClient] = None,
|
||||
) -> Dict:
|
||||
def _complete_upload(uuid: str) -> bool:
|
||||
"""
|
||||
完成上传过程
|
||||
|
||||
Args:
|
||||
uuid: 上传的UUID
|
||||
|
||||
Returns:
|
||||
是否成功
|
||||
"""
|
||||
url = f"{OSSUploadConfig.api_host}{OSSUploadConfig.complete_endpoint}"
|
||||
headers = {
|
||||
"Authorization": OSSUploadConfig.authorization,
|
||||
"Content-Type": "application/json"
|
||||
}
|
||||
|
||||
payload = {
|
||||
"uuid": uuid
|
||||
}
|
||||
|
||||
try:
|
||||
response = requests.post(url, headers=headers, json=payload)
|
||||
if response.status_code == 200:
|
||||
result = response.json()
|
||||
if result.get("code") == "10000":
|
||||
return True
|
||||
|
||||
print(f"完成上传失败: {response.status_code}, {response.text}")
|
||||
return False
|
||||
except Exception as e:
|
||||
print(f"完成上传异常: {str(e)}")
|
||||
return False
|
||||
|
||||
|
||||
def oss_upload(file_path: str, oss_path: str, filename: Optional[str] = None,
|
||||
process_key: str = "file-upload", device_id: str = "default") -> bool:
|
||||
"""
|
||||
文件上传主函数,包含重试机制
|
||||
|
||||
Args:
|
||||
file_path: 本地文件路径
|
||||
oss_path: OSS目标路径
|
||||
filename: 文件名,如果为None则使用file_path的文件名
|
||||
driver_name: 驱动名称,用于构造scene
|
||||
exp_type: 实验类型,用于构造scene
|
||||
max_retries: 最大重试次数
|
||||
client: HTTPClient实例,如果不提供则使用默认的http_client
|
||||
process_key: 处理键
|
||||
device_id: 设备ID
|
||||
|
||||
Returns:
|
||||
Dict: {
|
||||
"success": bool, # 是否上传成功
|
||||
"original_path": str, # 原始文件路径
|
||||
"oss_path": str # OSS路径(成功时)或空字符串(失败时)
|
||||
}
|
||||
是否成功上传
|
||||
"""
|
||||
file_path = Path(file_path)
|
||||
if filename is None:
|
||||
filename = os.path.basename(file_path)
|
||||
|
||||
if not os.path.exists(file_path):
|
||||
logger.error(f"[OSS] 文件不存在: {file_path}")
|
||||
return {"success": False, "original_path": file_path, "oss_path": ""}
|
||||
|
||||
max_retries = OSSUploadConfig.max_retries
|
||||
retry_count = 0
|
||||
oss_path = ""
|
||||
|
||||
while retry_count < max_retries:
|
||||
try:
|
||||
# 步骤1:获取预签名URL
|
||||
token_success, token_data = _get_oss_token(
|
||||
filename=filename, driver_name=driver_name, exp_type=exp_type, client=client
|
||||
# 步骤1:初始化上传
|
||||
init_success, init_data = _init_upload(
|
||||
file_path=file_path,
|
||||
oss_path=oss_path,
|
||||
filename=filename,
|
||||
process_key=process_key,
|
||||
device_id=device_id
|
||||
)
|
||||
|
||||
if not token_success:
|
||||
logger.warning(f"[OSS] 获取预签名URL失败,重试 {retry_count + 1}/{max_retries}")
|
||||
if not init_success:
|
||||
print(f"初始化上传失败,重试 {retry_count + 1}/{max_retries}")
|
||||
retry_count += 1
|
||||
time.sleep(1)
|
||||
time.sleep(1) # 等待1秒后重试
|
||||
continue
|
||||
|
||||
# 获取预签名URL和OSS路径
|
||||
upload_url = token_data.get("url")
|
||||
oss_path = token_data.get("path", "")
|
||||
# 获取UUID和上传URL
|
||||
uuid = init_data.get("uuid")
|
||||
upload_url = init_data.get("upload_url")
|
||||
|
||||
if not upload_url:
|
||||
logger.warning(f"[OSS] 无法获取上传URL,API未返回url字段")
|
||||
if not uuid or not upload_url:
|
||||
print(f"初始化上传返回数据不完整,重试 {retry_count + 1}/{max_retries}")
|
||||
retry_count += 1
|
||||
time.sleep(1)
|
||||
continue
|
||||
@@ -160,82 +163,69 @@ def oss_upload(
|
||||
# 步骤2:PUT上传文件
|
||||
put_success = _put_upload(file_path, upload_url)
|
||||
if not put_success:
|
||||
logger.warning(f"[OSS] PUT上传失败,重试 {retry_count + 1}/{max_retries}")
|
||||
print(f"PUT上传失败,重试 {retry_count + 1}/{max_retries}")
|
||||
retry_count += 1
|
||||
time.sleep(1)
|
||||
continue
|
||||
|
||||
# 步骤3:完成上传
|
||||
complete_success = _complete_upload(uuid)
|
||||
if not complete_success:
|
||||
print(f"完成上传失败,重试 {retry_count + 1}/{max_retries}")
|
||||
retry_count += 1
|
||||
time.sleep(1)
|
||||
continue
|
||||
|
||||
# 所有步骤都成功
|
||||
logger.info(f"[OSS] 文件 {file_path} 上传成功")
|
||||
return {"success": True, "original_path": file_path, "oss_path": oss_path}
|
||||
print(f"文件 {file_path} 上传成功")
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"[OSS] 上传过程异常: {str(e)},重试 {retry_count + 1}/{max_retries}")
|
||||
print(f"上传过程异常: {str(e)},重试 {retry_count + 1}/{max_retries}")
|
||||
retry_count += 1
|
||||
time.sleep(1)
|
||||
|
||||
logger.error(f"[OSS] 文件 {file_path} 上传失败,已达到最大重试次数 {max_retries}")
|
||||
return {"success": False, "original_path": file_path, "oss_path": oss_path}
|
||||
print(f"文件 {file_path} 上传失败,已达到最大重试次数 {max_retries}")
|
||||
return False
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
# python -m unilabos.app.oss_upload -f /path/to/your/file.txt --driver HPLC --type test
|
||||
# python -m unilabos.app.oss_upload -f /path/to/your/file.txt --driver HPLC --type test \
|
||||
# --ak xxx --sk yyy --remote-addr http://xxx/api/v1
|
||||
# python -m unilabos.app.oss_upload -f /path/to/your/file.txt
|
||||
# 命令行参数解析
|
||||
parser = argparse.ArgumentParser(description="文件上传测试工具")
|
||||
parser.add_argument("--file", "-f", type=str, required=True, help="要上传的本地文件路径")
|
||||
parser.add_argument("--driver", "-d", type=str, default="default", help="驱动名称")
|
||||
parser.add_argument("--type", "-t", type=str, default="default", help="实验类型")
|
||||
parser.add_argument("--ak", type=str, help="Access Key,如果提供则覆盖配置")
|
||||
parser.add_argument("--sk", type=str, help="Secret Key,如果提供则覆盖配置")
|
||||
parser.add_argument("--remote-addr", type=str, help="远程服务器地址(包含/api/v1),如果提供则覆盖配置")
|
||||
parser = argparse.ArgumentParser(description='文件上传测试工具')
|
||||
parser.add_argument('--file', '-f', type=str, required=True, help='要上传的本地文件路径')
|
||||
parser.add_argument('--path', '-p', type=str, default='/HPLC1/Any', help='OSS目标路径')
|
||||
parser.add_argument('--device', '-d', type=str, default='test-device', help='设备ID')
|
||||
parser.add_argument('--process', '-k', type=str, default='HPLC-txt-result', help='处理键')
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
# 检查文件是否存在
|
||||
if not os.path.exists(args.file):
|
||||
logger.error(f"错误:文件 {args.file} 不存在")
|
||||
print(f"错误:文件 {args.file} 不存在")
|
||||
exit(1)
|
||||
|
||||
# 如果提供了ak/sk/remote_addr,创建临时HTTPClient
|
||||
temp_client = None
|
||||
if args.ak and args.sk:
|
||||
import base64
|
||||
|
||||
auth = base64.b64encode(f"{args.ak}:{args.sk}".encode("utf-8")).decode("utf-8")
|
||||
remote_addr = args.remote_addr if args.remote_addr else http_client.remote_addr
|
||||
temp_client = HTTPClient(remote_addr=remote_addr, auth=auth)
|
||||
logger.info(f"[配置] 使用自定义配置: remote_addr={remote_addr}")
|
||||
elif args.remote_addr:
|
||||
temp_client = HTTPClient(remote_addr=args.remote_addr, auth=http_client.auth)
|
||||
logger.info(f"[配置] 使用自定义remote_addr: {args.remote_addr}")
|
||||
else:
|
||||
logger.info(f"[配置] 使用默认配置: remote_addr={http_client.remote_addr}")
|
||||
|
||||
logger.info("=" * 50)
|
||||
logger.info(f"开始上传文件: {args.file}")
|
||||
logger.info(f"驱动名称: {args.driver}")
|
||||
logger.info(f"实验类型: {args.type}")
|
||||
logger.info(f"Scene: {args.driver}-{args.type}")
|
||||
logger.info("=" * 50)
|
||||
print("=" * 50)
|
||||
print(f"开始上传文件: {args.file}")
|
||||
print(f"目标路径: {args.path}")
|
||||
print(f"设备ID: {args.device}")
|
||||
print(f"处理键: {args.process}")
|
||||
print("=" * 50)
|
||||
|
||||
# 执行上传
|
||||
result = oss_upload(
|
||||
success = oss_upload(
|
||||
file_path=args.file,
|
||||
oss_path=args.path,
|
||||
filename=None, # 使用默认文件名
|
||||
driver_name=args.driver,
|
||||
exp_type=args.type,
|
||||
client=temp_client,
|
||||
process_key=args.process,
|
||||
device_id=args.device
|
||||
)
|
||||
|
||||
# 输出结果
|
||||
if result["success"]:
|
||||
logger.info(f"\n√ 文件上传成功!")
|
||||
logger.info(f"原始路径: {result['original_path']}")
|
||||
logger.info(f"OSS路径: {result['oss_path']}")
|
||||
if success:
|
||||
print("\n√ 文件上传成功!")
|
||||
exit(0)
|
||||
else:
|
||||
logger.error(f"\n× 文件上传失败!")
|
||||
logger.error(f"原始路径: {result['original_path']}")
|
||||
print("\n× 文件上传失败!")
|
||||
exit(1)
|
||||
|
||||
|
||||
@@ -9,22 +9,13 @@ import asyncio
|
||||
|
||||
import yaml
|
||||
|
||||
from unilabos.app.web.controller import (
|
||||
devices,
|
||||
job_add,
|
||||
job_info,
|
||||
get_online_devices,
|
||||
get_device_actions,
|
||||
get_action_schema,
|
||||
get_all_available_actions,
|
||||
)
|
||||
from unilabos.app.web.controler import devices, job_add, job_info
|
||||
from unilabos.app.model import (
|
||||
Resp,
|
||||
RespCode,
|
||||
JobStatusResp,
|
||||
JobAddResp,
|
||||
JobAddReq,
|
||||
JobData,
|
||||
)
|
||||
from unilabos.app.web.utils.host_utils import get_host_node_info
|
||||
from unilabos.registry.registry import lab_registry
|
||||
@@ -1243,65 +1234,6 @@ def get_devices():
|
||||
return Resp(data=dict(data))
|
||||
|
||||
|
||||
@api.get("/online-devices", summary="Online devices list", response_model=Resp)
|
||||
def api_get_online_devices():
|
||||
"""获取在线设备列表
|
||||
|
||||
返回当前在线的设备列表,包含设备ID、命名空间、机器名等信息
|
||||
"""
|
||||
isok, data = get_online_devices()
|
||||
if not isok:
|
||||
return Resp(code=RespCode.ErrorHostNotInit, message=data.get("error", "Unknown error"))
|
||||
|
||||
return Resp(data=data)
|
||||
|
||||
|
||||
@api.get("/devices/{device_id}/actions", summary="Device actions list", response_model=Resp)
|
||||
def api_get_device_actions(device_id: str):
|
||||
"""获取设备可用的动作列表
|
||||
|
||||
Args:
|
||||
device_id: 设备ID
|
||||
|
||||
返回指定设备的所有可用动作,包含动作名称、类型、是否繁忙等信息
|
||||
"""
|
||||
isok, data = get_device_actions(device_id)
|
||||
if not isok:
|
||||
return Resp(code=RespCode.ErrorInvalidReq, message=data.get("error", "Unknown error"))
|
||||
|
||||
return Resp(data=data)
|
||||
|
||||
|
||||
@api.get("/devices/{device_id}/actions/{action_name}/schema", summary="Action schema", response_model=Resp)
|
||||
def api_get_action_schema(device_id: str, action_name: str):
|
||||
"""获取动作的Schema详情
|
||||
|
||||
Args:
|
||||
device_id: 设备ID
|
||||
action_name: 动作名称
|
||||
|
||||
返回动作的参数Schema、默认值、类型等详细信息
|
||||
"""
|
||||
isok, data = get_action_schema(device_id, action_name)
|
||||
if not isok:
|
||||
return Resp(code=RespCode.ErrorInvalidReq, message=data.get("error", "Unknown error"))
|
||||
|
||||
return Resp(data=data)
|
||||
|
||||
|
||||
@api.get("/actions", summary="All available actions", response_model=Resp)
|
||||
def api_get_all_actions():
|
||||
"""获取所有设备的可用动作
|
||||
|
||||
返回所有已注册设备的动作列表,包含设备信息和各动作的状态
|
||||
"""
|
||||
isok, data = get_all_available_actions()
|
||||
if not isok:
|
||||
return Resp(code=RespCode.ErrorHostNotInit, message=data.get("error", "Unknown error"))
|
||||
|
||||
return Resp(data=data)
|
||||
|
||||
|
||||
@api.get("/job/{id}/status", summary="Job status", response_model=JobStatusResp)
|
||||
def job_status(id: str):
|
||||
"""获取任务状态"""
|
||||
@@ -1312,22 +1244,11 @@ def job_status(id: str):
|
||||
@api.post("/job/add", summary="Create job", response_model=JobAddResp)
|
||||
def post_job_add(req: JobAddReq):
|
||||
"""创建任务"""
|
||||
# 检查必要参数:device_id 和 action
|
||||
if not req.device_id:
|
||||
return JobAddResp(
|
||||
data=JobData(jobId="", status=6),
|
||||
code=RespCode.ErrorInvalidReq,
|
||||
message="device_id is required",
|
||||
)
|
||||
|
||||
action_name = req.data.get("action", req.action) if req.data else req.action
|
||||
if not action_name:
|
||||
return JobAddResp(
|
||||
data=JobData(jobId="", status=6),
|
||||
code=RespCode.ErrorInvalidReq,
|
||||
message="action is required",
|
||||
)
|
||||
device_id = req.device_id
|
||||
if not req.data:
|
||||
return Resp(code=RespCode.ErrorInvalidReq, message="Invalid request data")
|
||||
|
||||
req.device_id = device_id
|
||||
data = job_add(req)
|
||||
return JobAddResp(data=data)
|
||||
|
||||
|
||||
unilabos/app/web/controler.py (new file, 45 lines)
@@ -0,0 +1,45 @@
|
||||
|
||||
import json
|
||||
import traceback
|
||||
import uuid
|
||||
from unilabos.app.model import JobAddReq, JobData
|
||||
from unilabos.ros.nodes.presets.host_node import HostNode
|
||||
from unilabos.utils.type_check import serialize_result_info
|
||||
|
||||
|
||||
def get_resources() -> tuple:
|
||||
if HostNode.get_instance() is None:
|
||||
return False, "Host node not initialized"
|
||||
|
||||
return True, HostNode.get_instance().resources_config
|
||||
|
||||
def devices() -> tuple:
|
||||
if HostNode.get_instance() is None:
|
||||
return False, "Host node not initialized"
|
||||
|
||||
return True, HostNode.get_instance().devices_config
|
||||
|
||||
def job_info(id: str):
|
||||
get_goal_status = HostNode.get_instance().get_goal_status(id)
|
||||
return JobData(jobId=id, status=get_goal_status)
|
||||
|
||||
def job_add(req: JobAddReq) -> JobData:
|
||||
if req.job_id is None:
|
||||
req.job_id = str(uuid.uuid4())
|
||||
action_name = req.data["action"]
|
||||
action_type = req.data.get("action_type", "LocalUnknown")
|
||||
action_args = req.data.get("action_kwargs", None) # 兼容老版本,后续删除
|
||||
if action_args is None:
|
||||
action_args = req.data.get("action_args")
|
||||
else:
|
||||
if "command" in action_args:
|
||||
action_args = action_args["command"]
|
||||
# print(f"job_add:{req.device_id} {action_name} {action_kwargs}")
|
||||
try:
|
||||
HostNode.get_instance().send_goal(req.device_id, action_type=action_type, action_name=action_name, action_kwargs=action_args, goal_uuid=req.job_id, server_info=req.server_info)
|
||||
except Exception as e:
|
||||
for bridge in HostNode.get_instance().bridges:
|
||||
traceback.print_exc()
|
||||
if hasattr(bridge, "publish_job_status"):
|
||||
bridge.publish_job_status({}, req.job_id, "failed", serialize_result_info(traceback.format_exc(), False, {}))
|
||||
return JobData(jobId=req.job_id)
|
||||
@@ -1,587 +0,0 @@
|
||||
"""
|
||||
Web API Controller
|
||||
|
||||
提供Web API的控制器函数,处理设备、任务和动作相关的业务逻辑
|
||||
"""
|
||||
|
||||
import threading
|
||||
import time
|
||||
import traceback
|
||||
import uuid
|
||||
from dataclasses import dataclass, field
|
||||
from typing import Optional, Dict, Any, Tuple
|
||||
|
||||
from unilabos.app.model import JobAddReq, JobData
|
||||
from unilabos.ros.nodes.presets.host_node import HostNode
|
||||
from unilabos.utils import logger
|
||||
|
||||
|
||||
@dataclass
|
||||
class JobResult:
|
||||
"""任务结果数据"""
|
||||
|
||||
job_id: str
|
||||
status: int # 4:SUCCEEDED, 5:CANCELED, 6:ABORTED
|
||||
result: Dict[str, Any] = field(default_factory=dict)
|
||||
feedback: Dict[str, Any] = field(default_factory=dict)
|
||||
timestamp: float = field(default_factory=time.time)
|
||||
|
||||
|
||||
class JobResultStore:
|
||||
"""任务结果存储(单例)"""
|
||||
|
||||
_instance: Optional["JobResultStore"] = None
|
||||
_lock = threading.Lock()
|
||||
|
||||
def __init__(self):
|
||||
if not hasattr(self, "_initialized"):
|
||||
self._results: Dict[str, JobResult] = {}
|
||||
self._results_lock = threading.RLock()
|
||||
self._initialized = True
|
||||
|
||||
def __new__(cls):
|
||||
if cls._instance is None:
|
||||
with cls._lock:
|
||||
if cls._instance is None:
|
||||
cls._instance = super().__new__(cls)
|
||||
return cls._instance
|
||||
|
||||
def store_result(
|
||||
self, job_id: str, status: int, result: Optional[Dict[str, Any]], feedback: Optional[Dict[str, Any]] = None
|
||||
):
|
||||
"""存储任务结果"""
|
||||
with self._results_lock:
|
||||
self._results[job_id] = JobResult(
|
||||
job_id=job_id,
|
||||
status=status,
|
||||
result=result or {},
|
||||
feedback=feedback or {},
|
||||
timestamp=time.time(),
|
||||
)
|
||||
logger.debug(f"[JobResultStore] Stored result for job {job_id[:8]}, status={status}")
|
||||
|
||||
def get_and_remove(self, job_id: str) -> Optional[JobResult]:
|
||||
"""获取并删除任务结果"""
|
||||
with self._results_lock:
|
||||
result = self._results.pop(job_id, None)
|
||||
if result:
|
||||
logger.debug(f"[JobResultStore] Retrieved and removed result for job {job_id[:8]}")
|
||||
return result
|
||||
|
||||
def get_result(self, job_id: str) -> Optional[JobResult]:
|
||||
"""仅获取任务结果(不删除)"""
|
||||
with self._results_lock:
|
||||
return self._results.get(job_id)
|
||||
|
||||
def cleanup_old_results(self, max_age_seconds: float = 3600):
|
||||
"""清理过期的结果"""
|
||||
current_time = time.time()
|
||||
with self._results_lock:
|
||||
expired_jobs = [
|
||||
job_id for job_id, result in self._results.items() if current_time - result.timestamp > max_age_seconds
|
||||
]
|
||||
for job_id in expired_jobs:
|
||||
del self._results[job_id]
|
||||
logger.debug(f"[JobResultStore] Cleaned up expired result for job {job_id[:8]}")
|
||||
|
||||
|
||||
# 全局结果存储实例
|
||||
job_result_store = JobResultStore()
|
||||
|
||||
|
||||
def store_job_result(
|
||||
job_id: str, status: str, result: Optional[Dict[str, Any]], feedback: Optional[Dict[str, Any]] = None
|
||||
):
|
||||
"""存储任务结果(供外部调用)
|
||||
|
||||
Args:
|
||||
job_id: 任务ID
|
||||
status: 状态字符串 ("success", "failed", "cancelled")
|
||||
result: 结果数据
|
||||
feedback: 反馈数据
|
||||
"""
|
||||
# 转换状态字符串为整数
|
||||
status_map = {
|
||||
"success": 4, # SUCCEEDED
|
||||
"failed": 6, # ABORTED
|
||||
"cancelled": 5, # CANCELED
|
||||
"running": 2, # EXECUTING
|
||||
}
|
||||
status_int = status_map.get(status, 0)
|
||||
|
||||
# 只存储最终状态
|
||||
if status_int in (4, 5, 6):
|
||||
job_result_store.store_result(job_id, status_int, result, feedback)
|
||||
|
||||
|
||||
def get_resources() -> Tuple[bool, Any]:
|
||||
"""获取资源配置
|
||||
|
||||
Returns:
|
||||
Tuple[bool, Any]: (是否成功, 资源配置或错误信息)
|
||||
"""
|
||||
host_node = HostNode.get_instance(0)
|
||||
if host_node is None:
|
||||
return False, "Host node not initialized"
|
||||
|
||||
return True, host_node.resources_config
|
||||
|
||||
|
||||
def devices() -> Tuple[bool, Any]:
|
||||
"""获取设备配置
|
||||
|
||||
Returns:
|
||||
Tuple[bool, Any]: (是否成功, 设备配置或错误信息)
|
||||
"""
|
||||
host_node = HostNode.get_instance(0)
|
||||
if host_node is None:
|
||||
return False, "Host node not initialized"
|
||||
|
||||
return True, host_node.devices_config
|
||||
|
||||
|
||||
def job_info(job_id: str, remove_after_read: bool = True) -> JobData:
|
||||
"""获取任务信息
|
||||
|
||||
Args:
|
||||
job_id: 任务ID
|
||||
remove_after_read: 是否在读取后删除结果(默认True)
|
||||
|
||||
Returns:
|
||||
JobData: 任务数据
|
||||
"""
|
||||
# 首先检查结果存储中是否有已完成的结果
|
||||
if remove_after_read:
|
||||
stored_result = job_result_store.get_and_remove(job_id)
|
||||
else:
|
||||
stored_result = job_result_store.get_result(job_id)
|
||||
|
||||
if stored_result:
|
||||
# 有存储的结果,直接返回
|
||||
return JobData(
|
||||
jobId=job_id,
|
||||
status=stored_result.status,
|
||||
result=stored_result.result,
|
||||
)
|
||||
|
||||
# 没有存储的结果,从 HostNode 获取当前状态
|
||||
host_node = HostNode.get_instance(0)
|
||||
if host_node is None:
|
||||
return JobData(jobId=job_id, status=0)
|
||||
|
||||
get_goal_status = host_node.get_goal_status(job_id)
|
||||
return JobData(jobId=job_id, status=get_goal_status)
|
||||
|
||||
|
||||
def check_device_action_busy(device_id: str, action_name: str) -> Tuple[bool, Optional[str]]:
|
||||
"""检查设备动作是否正在执行(被占用)
|
||||
|
||||
Args:
|
||||
device_id: 设备ID
|
||||
action_name: 动作名称
|
||||
|
||||
Returns:
|
||||
Tuple[bool, Optional[str]]: (是否繁忙, 当前执行的job_id或None)
|
||||
"""
|
||||
host_node = HostNode.get_instance(0)
|
||||
if host_node is None:
|
||||
return False, None
|
||||
|
||||
device_action_key = f"/devices/{device_id}/{action_name}"
|
||||
|
||||
# 检查 _device_action_status 中是否有正在执行的任务
|
||||
if device_action_key in host_node._device_action_status:
|
||||
status = host_node._device_action_status[device_action_key]
|
||||
if status.job_ids:
|
||||
# 返回第一个正在执行的job_id
|
||||
current_job_id = next(iter(status.job_ids.keys()), None)
|
||||
return True, current_job_id
|
||||
|
||||
return False, None
|
||||
|
||||
|
||||
def _get_action_type(device_id: str, action_name: str) -> Optional[str]:
|
||||
"""从注册表自动获取动作类型
|
||||
|
||||
Args:
|
||||
device_id: 设备ID
|
||||
action_name: 动作名称
|
||||
|
||||
Returns:
|
||||
动作类型字符串,未找到返回None
|
||||
"""
|
||||
try:
|
||||
from unilabos.ros.nodes.base_device_node import registered_devices
|
||||
|
||||
# 方法1: 从运行时注册设备获取
|
||||
if device_id in registered_devices:
|
||||
device_info = registered_devices[device_id]
|
||||
base_node = device_info.get("base_node_instance")
|
||||
if base_node and hasattr(base_node, "_action_value_mappings"):
|
||||
action_mappings = base_node._action_value_mappings
|
||||
# 尝试直接匹配或 auto- 前缀匹配
|
||||
for key in [action_name, f"auto-{action_name}"]:
|
||||
if key in action_mappings:
|
||||
action_type = action_mappings[key].get("type")
|
||||
if action_type:
|
||||
# 转换为字符串格式
|
||||
if hasattr(action_type, "__module__") and hasattr(action_type, "__name__"):
|
||||
return f"{action_type.__module__}.{action_type.__name__}"
|
||||
return str(action_type)
|
||||
|
||||
# 方法2: 从lab_registry获取
|
||||
from unilabos.registry.registry import lab_registry
|
||||
|
||||
host_node = HostNode.get_instance(0)
|
||||
if host_node and lab_registry:
|
||||
devices_config = host_node.devices_config
|
||||
device_class = None
|
||||
|
||||
for tree in devices_config.trees:
|
||||
node = tree.root_node
|
||||
if node.res_content.id == device_id:
|
||||
device_class = node.res_content.klass
|
||||
break
|
||||
|
||||
if device_class and device_class in lab_registry.device_type_registry:
|
||||
device_type_info = lab_registry.device_type_registry[device_class]
|
||||
class_info = device_type_info.get("class", {})
|
||||
action_mappings = class_info.get("action_value_mappings", {})
|
||||
|
||||
for key in [action_name, f"auto-{action_name}"]:
|
||||
if key in action_mappings:
|
||||
action_type = action_mappings[key].get("type")
|
||||
if action_type:
|
||||
if hasattr(action_type, "__module__") and hasattr(action_type, "__name__"):
|
||||
return f"{action_type.__module__}.{action_type.__name__}"
|
||||
return str(action_type)
|
||||
|
||||
except Exception as e:
|
||||
logger.warning(f"[Controller] Failed to get action type for {device_id}/{action_name}: {str(e)}")
|
||||
|
||||
return None
|
||||
|
||||
|
||||
def job_add(req: JobAddReq) -> JobData:
|
||||
"""添加任务(检查设备是否繁忙,繁忙则返回失败)
|
||||
|
||||
Args:
|
||||
req: 任务添加请求
|
||||
|
||||
Returns:
|
||||
JobData: 任务数据(包含状态)
|
||||
"""
|
||||
# 服务端自动生成 job_id 和 task_id
|
||||
job_id = str(uuid.uuid4())
|
||||
task_id = str(uuid.uuid4())
|
||||
|
||||
# 服务端自动生成 server_info
|
||||
server_info = {"send_timestamp": time.time()}
|
||||
|
||||
host_node = HostNode.get_instance(0)
|
||||
if host_node is None:
|
||||
logger.error(f"[Controller] Host node not initialized for job: {job_id[:8]}")
|
||||
return JobData(jobId=job_id, status=6) # 6 = ABORTED
|
||||
|
||||
# 解析动作信息
|
||||
action_name = req.data.get("action", req.action) if req.data else req.action
|
||||
action_args = req.data.get("action_kwargs") or req.data.get("action_args") if req.data else req.action_args
|
||||
|
||||
if action_args is None:
|
||||
action_args = req.action_args or {}
|
||||
elif isinstance(action_args, dict) and "command" in action_args:
|
||||
action_args = action_args["command"]
|
||||
|
||||
# 自动获取 action_type
|
||||
action_type = _get_action_type(req.device_id, action_name)
|
||||
if action_type is None:
|
||||
logger.error(f"[Controller] Action type not found for {req.device_id}/{action_name}")
|
||||
return JobData(jobId=job_id, status=6) # ABORTED
|
||||
|
||||
# 检查设备动作是否繁忙
|
||||
is_busy, current_job_id = check_device_action_busy(req.device_id, action_name)
|
||||
|
||||
if is_busy:
|
||||
logger.warning(
|
||||
f"[Controller] Device action busy: {req.device_id}/{action_name}, "
|
||||
f"current job: {current_job_id[:8] if current_job_id else 'unknown'}"
|
||||
)
|
||||
# 返回失败状态,status=6 表示 ABORTED
|
||||
return JobData(jobId=job_id, status=6)
|
||||
|
||||
# 设备空闲,提交任务执行
|
||||
try:
|
||||
from unilabos.app.ws_client import QueueItem
|
||||
|
||||
device_action_key = f"/devices/{req.device_id}/{action_name}"
|
||||
queue_item = QueueItem(
|
||||
task_type="job_call_back_status",
|
||||
device_id=req.device_id,
|
||||
action_name=action_name,
|
||||
task_id=task_id,
|
||||
job_id=job_id,
|
||||
device_action_key=device_action_key,
|
||||
)
|
||||
|
||||
host_node.send_goal(
|
||||
queue_item,
|
||||
action_type=action_type,
|
||||
action_kwargs=action_args,
|
||||
server_info=server_info,
|
||||
)
|
||||
|
||||
logger.info(f"[Controller] Job submitted: {job_id[:8]} -> {req.device_id}/{action_name}")
|
||||
# 返回已接受状态,status=1 表示 ACCEPTED
|
||||
return JobData(jobId=job_id, status=1)
|
||||
|
||||
except ValueError as e:
|
||||
# ActionClient not found 等错误
|
||||
logger.error(f"[Controller] Action not available: {str(e)}")
|
||||
return JobData(jobId=job_id, status=6) # ABORTED
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"[Controller] Error submitting job: {str(e)}")
|
||||
traceback.print_exc()
|
||||
return JobData(jobId=job_id, status=6) # ABORTED
|
||||
|
||||
|
||||
def get_online_devices() -> Tuple[bool, Dict[str, Any]]:
|
||||
"""获取在线设备列表
|
||||
|
||||
Returns:
|
||||
Tuple[bool, Dict]: (是否成功, 在线设备信息)
|
||||
"""
|
||||
host_node = HostNode.get_instance(0)
|
||||
if host_node is None:
|
||||
return False, {"error": "Host node not initialized"}
|
||||
|
||||
try:
|
||||
from unilabos.ros.nodes.base_device_node import registered_devices
|
||||
|
||||
online_devices = {}
|
||||
for device_key in host_node._online_devices:
|
||||
# device_key 格式: "namespace/device_id"
|
||||
parts = device_key.split("/")
|
||||
if len(parts) >= 2:
|
||||
device_id = parts[-1]
|
||||
else:
|
||||
device_id = device_key
|
||||
|
||||
# 获取设备详细信息
|
||||
device_info = registered_devices.get(device_id, {})
|
||||
machine_name = host_node.device_machine_names.get(device_id, "未知")
|
||||
|
||||
online_devices[device_id] = {
|
||||
"device_key": device_key,
|
||||
"namespace": host_node.devices_names.get(device_id, ""),
|
||||
"machine_name": machine_name,
|
||||
"uuid": device_info.get("uuid", "") if device_info else "",
|
||||
"node_name": device_info.get("node_name", "") if device_info else "",
|
||||
}
|
||||
|
||||
return True, {
|
||||
"online_devices": online_devices,
|
||||
"total_count": len(online_devices),
|
||||
"timestamp": time.time(),
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"[Controller] Error getting online devices: {str(e)}")
|
||||
traceback.print_exc()
|
||||
return False, {"error": str(e)}
|
||||
|
||||
|
||||
def get_device_actions(device_id: str) -> Tuple[bool, Dict[str, Any]]:
|
||||
"""获取设备可用的动作列表
|
||||
|
||||
Args:
|
||||
device_id: 设备ID
|
||||
|
||||
Returns:
|
||||
Tuple[bool, Dict]: (是否成功, 动作列表信息)
|
||||
"""
|
||||
host_node = HostNode.get_instance(0)
|
||||
if host_node is None:
|
||||
return False, {"error": "Host node not initialized"}
|
||||
|
||||
try:
|
||||
from unilabos.ros.nodes.base_device_node import registered_devices
|
||||
from unilabos.app.web.utils.action_utils import get_action_info
|
||||
|
||||
# 检查设备是否已注册
|
||||
if device_id not in registered_devices:
|
||||
return False, {"error": f"Device not found: {device_id}"}
|
||||
|
||||
device_info = registered_devices[device_id]
|
||||
actions = device_info.get("actions", {})
|
||||
|
||||
actions_list = {}
|
||||
for action_name, action_server in actions.items():
|
||||
try:
|
||||
action_info = get_action_info(action_server, action_name)
|
||||
# 检查动作是否繁忙
|
||||
is_busy, current_job = check_device_action_busy(device_id, action_name)
|
||||
actions_list[action_name] = {
|
||||
**action_info,
|
||||
"is_busy": is_busy,
|
||||
"current_job_id": current_job[:8] if current_job else None,
|
||||
}
|
||||
except Exception as e:
|
||||
logger.warning(f"[Controller] Error getting action info for {action_name}: {str(e)}")
|
||||
actions_list[action_name] = {
|
||||
"type_name": "unknown",
|
||||
"action_path": f"/devices/{device_id}/{action_name}",
|
||||
"is_busy": False,
|
||||
"error": str(e),
|
||||
}
|
||||
|
||||
return True, {
|
||||
"device_id": device_id,
|
||||
"actions": actions_list,
|
||||
"action_count": len(actions_list),
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"[Controller] Error getting device actions: {str(e)}")
|
||||
traceback.print_exc()
|
||||
return False, {"error": str(e)}
|
||||
|
||||
|
||||
def get_action_schema(device_id: str, action_name: str) -> Tuple[bool, Dict[str, Any]]:
|
||||
"""获取动作的Schema详情
|
||||
|
||||
Args:
|
||||
device_id: 设备ID
|
||||
action_name: 动作名称
|
||||
|
||||
Returns:
|
||||
Tuple[bool, Dict]: (是否成功, Schema信息)
|
||||
"""
|
||||
host_node = HostNode.get_instance(0)
|
||||
if host_node is None:
|
||||
return False, {"error": "Host node not initialized"}
|
||||
|
||||
try:
|
||||
from unilabos.registry.registry import lab_registry
|
||||
from unilabos.ros.nodes.base_device_node import registered_devices
|
||||
|
||||
result = {
|
||||
"device_id": device_id,
|
||||
"action_name": action_name,
|
||||
"schema": None,
|
||||
"goal_default": None,
|
||||
"action_type": None,
|
||||
"is_busy": False,
|
||||
}
|
||||
|
||||
# 检查动作是否繁忙
|
||||
is_busy, current_job = check_device_action_busy(device_id, action_name)
|
||||
result["is_busy"] = is_busy
|
||||
result["current_job_id"] = current_job[:8] if current_job else None
|
||||
|
||||
# 方法1: 从 registered_devices 获取运行时信息
|
||||
if device_id in registered_devices:
|
||||
device_info = registered_devices[device_id]
|
||||
base_node = device_info.get("base_node_instance")
|
||||
|
||||
if base_node and hasattr(base_node, "_action_value_mappings"):
|
||||
action_mappings = base_node._action_value_mappings
|
||||
if action_name in action_mappings:
|
||||
mapping = action_mappings[action_name]
|
||||
result["schema"] = mapping.get("schema")
|
||||
result["goal_default"] = mapping.get("goal_default")
|
||||
result["action_type"] = str(mapping.get("type", ""))
|
||||
|
||||
# 方法2: 从 lab_registry 获取注册表信息(如果运行时没有)
|
||||
if result["schema"] is None and lab_registry:
|
||||
# 尝试查找设备类型
|
||||
devices_config = host_node.devices_config
|
||||
device_class = None
|
||||
|
||||
# 从配置中获取设备类型
|
||||
for tree in devices_config.trees:
|
||||
node = tree.root_node
|
||||
if node.res_content.id == device_id:
|
||||
device_class = node.res_content.klass
|
||||
break
|
||||
|
||||
if device_class and device_class in lab_registry.device_type_registry:
|
||||
device_type_info = lab_registry.device_type_registry[device_class]
|
||||
class_info = device_type_info.get("class", {})
|
||||
action_mappings = class_info.get("action_value_mappings", {})
|
||||
|
||||
# 尝试直接匹配或 auto- 前缀匹配
|
||||
for key in [action_name, f"auto-{action_name}"]:
|
||||
if key in action_mappings:
|
||||
mapping = action_mappings[key]
|
||||
result["schema"] = mapping.get("schema")
|
||||
result["goal_default"] = mapping.get("goal_default")
|
||||
result["action_type"] = str(mapping.get("type", ""))
|
||||
result["handles"] = mapping.get("handles", {})
|
||||
result["placeholder_keys"] = mapping.get("placeholder_keys", {})
|
||||
break
|
||||
|
||||
if result["schema"] is None:
|
||||
return False, {"error": f"Action schema not found: {device_id}/{action_name}"}
|
||||
|
||||
return True, result
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"[Controller] Error getting action schema: {str(e)}")
|
||||
traceback.print_exc()
|
||||
return False, {"error": str(e)}
|
||||
|
||||
|
||||
def get_all_available_actions() -> Tuple[bool, Dict[str, Any]]:
|
||||
"""获取所有设备的可用动作
|
||||
|
||||
Returns:
|
||||
Tuple[bool, Dict]: (是否成功, 所有设备的动作信息)
|
||||
"""
|
||||
host_node = HostNode.get_instance(0)
|
||||
if host_node is None:
|
||||
return False, {"error": "Host node not initialized"}
|
||||
|
||||
try:
|
||||
from unilabos.ros.nodes.base_device_node import registered_devices
|
||||
from unilabos.app.web.utils.action_utils import get_action_info
|
||||
|
||||
all_actions = {}
|
||||
total_action_count = 0
|
||||
|
||||
for device_id, device_info in registered_devices.items():
|
||||
actions = device_info.get("actions", {})
|
||||
device_actions = {}
|
||||
|
||||
for action_name, action_server in actions.items():
|
||||
try:
|
||||
action_info = get_action_info(action_server, action_name)
|
||||
is_busy, current_job = check_device_action_busy(device_id, action_name)
|
||||
device_actions[action_name] = {
|
||||
"type_name": action_info.get("type_name", ""),
|
||||
"action_path": action_info.get("action_path", ""),
|
||||
"is_busy": is_busy,
|
||||
"current_job_id": current_job[:8] if current_job else None,
|
||||
}
|
||||
total_action_count += 1
|
||||
except Exception as e:
|
||||
logger.warning(f"[Controller] Error processing action {device_id}/{action_name}: {str(e)}")
|
||||
|
||||
if device_actions:
|
||||
all_actions[device_id] = {
|
||||
"actions": device_actions,
|
||||
"action_count": len(device_actions),
|
||||
"machine_name": host_node.device_machine_names.get(device_id, "未知"),
|
||||
}
|
||||
|
||||
return True, {
|
||||
"devices": all_actions,
|
||||
"device_count": len(all_actions),
|
||||
"total_action_count": total_action_count,
|
||||
"timestamp": time.time(),
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"[Controller] Error getting all available actions: {str(e)}")
|
||||
traceback.print_exc()
|
||||
return False, {"error": str(e)}
|
||||
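Both helpers above return a `(ok, payload)` tuple, so callers can check the flag before trusting the payload. The sketch below shows one possible consumer that lists idle actions before dispatching work; it assumes it runs in a process where the HostNode is already initialized, and the `pick_idle_actions` name is purely illustrative.

```python
# Sketch: enumerate registered actions and keep only those not currently busy.
def pick_idle_actions():
    ok, payload = get_all_available_actions()
    if not ok:
        raise RuntimeError(payload.get("error", "unknown error"))
    idle = []
    for device_id, info in payload["devices"].items():
        for action_name, meta in info["actions"].items():
            if not meta["is_busy"]:
                idle.append((device_id, action_name, meta["type_name"]))
    return idle
```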
@@ -359,7 +359,6 @@ class MessageProcessor:
        self.device_manager = device_manager
        self.queue_processor = None  # 延迟设置
        self.websocket_client = None  # 延迟设置
        self.session_id = ""

        # WebSocket连接
        self.websocket = None
@@ -389,7 +388,7 @@ class MessageProcessor:
        self.is_running = True
        self.thread = threading.Thread(target=self._run, daemon=True, name="MessageProcessor")
        self.thread.start()
        logger.trace("[MessageProcessor] Started")
        logger.info("[MessageProcessor] Started")

    def stop(self) -> None:
        """停止消息处理线程"""
@@ -428,10 +427,7 @@ class MessageProcessor:
                ssl=ssl_context,
                ping_interval=WSConfig.ping_interval,
                ping_timeout=10,
                additional_headers={
                    "Authorization": f"Lab {BasicConfig.auth_secret()}",
                    "EdgeSession": f"{self.session_id}",
                },
                additional_headers={"Authorization": f"Lab {BasicConfig.auth_secret()}"},
                logger=ws_logger,
            ) as websocket:
                self.websocket = websocket
@@ -576,9 +572,6 @@ class MessageProcessor:
            await self._handle_resource_tree_update(message_data, "update")
        elif message_type == "remove_material":
            await self._handle_resource_tree_update(message_data, "remove")
        elif message_type == "session_id":
            self.session_id = message_data.get("session_id")
            logger.info(f"[MessageProcessor] Session ID: {self.session_id}")
        else:
            logger.debug(f"[MessageProcessor] Unknown message type: {message_type}")

@@ -939,7 +932,7 @@ class QueueProcessor:
        # 事件通知机制
        self.queue_update_event = threading.Event()

        logger.trace("[QueueProcessor] Initialized")
        logger.info("[QueueProcessor] Initialized")

    def set_websocket_client(self, websocket_client: "WebSocketClient"):
        """设置WebSocket客户端引用"""
@@ -954,7 +947,7 @@ class QueueProcessor:
        self.is_running = True
        self.thread = threading.Thread(target=self._run, daemon=True, name="QueueProcessor")
        self.thread.start()
        logger.trace("[QueueProcessor] Started")
        logger.info("[QueueProcessor] Started")

    def stop(self) -> None:
        """停止队列处理线程"""
@@ -1203,18 +1196,6 @@ class WebSocketClient(BaseCommunicationClient):

        logger.info("[WebSocketClient] Stopping connection")

        # 发送 normal_exit 消息
        if self.is_connected():
            try:
                session_id = self.message_processor.session_id
                message = {"action": "normal_exit", "data": {"session_id": session_id}}
                self.message_processor.send_message(message)
                logger.info(f"[WebSocketClient] Sent normal_exit message with session_id: {session_id}")
                # 给一点时间让消息发送出去
                time.sleep(1)
            except Exception as e:
                logger.warning(f"[WebSocketClient] Failed to send normal_exit message: {str(e)}")

        # 停止两个核心线程
        self.message_processor.stop()
        self.queue_processor.stop()
@@ -1314,19 +1295,3 @@ class WebSocketClient(BaseCommunicationClient):
            logger.info(f"[WebSocketClient] Job {job_log} cancelled successfully")
        else:
            logger.warning(f"[WebSocketClient] Failed to cancel job {job_log}")

    def publish_host_ready(self) -> None:
        """发布host_node ready信号"""
        if self.is_disabled or not self.is_connected():
            logger.debug("[WebSocketClient] Not connected, cannot publish host ready signal")
            return

        message = {
            "action": "host_node_ready",
            "data": {
                "status": "ready",
                "timestamp": time.time(),
            },
        }
        self.message_processor.send_message(message)
        logger.info("[WebSocketClient] Host node ready signal published")

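The hunks above all use the same message envelope: a dict with an `action` string and a `data` dict (`normal_exit`, `host_node_ready`, and so on). A minimal, standalone sketch of that shape, assuming only the standard library (`build_envelope` is a hypothetical helper, not part of the client):

```python
import json
import time

def build_envelope(action: str, **data) -> str:
    """Serialize a message in the {"action": ..., "data": {...}} shape used above."""
    return json.dumps({"action": action, "data": data})

print(build_envelope("host_node_ready", status="ready", timestamp=time.time()))
print(build_envelope("normal_exit", session_id="session-123"))  # session id value is illustrative
```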
@@ -39,6 +39,15 @@ class WSConfig:
    ping_interval = 30  # ping间隔(秒)


# OSS上传配置
class OSSUploadConfig:
    api_host = ""
    authorization = ""
    init_endpoint = ""
    complete_endpoint = ""
    max_retries = 3


# HTTP配置
class HTTPConfig:
    remote_addr = "http://127.0.0.1:48197/api/v1"

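One natural consumer of `OSSUploadConfig.max_retries` is a retry wrapper around the upload call used in the HPLC checker below. A minimal sketch, assuming the keyword style and the result dict (`success`, `original_path`) shown there; the upload function is passed in as `upload_fn` so no import path has to be assumed:

```python
import os
import time

def upload_with_retry(upload_fn, path: str, max_retries: int = 3, **kwargs) -> dict:
    """Retry an OSS upload up to max_retries times with a short backoff."""
    result = {"success": False, "original_path": path}
    for attempt in range(1, max_retries + 1):
        result = upload_fn(path, filename=os.path.basename(path), **kwargs)
        if result.get("success"):
            return result
        time.sleep(min(2 ** attempt, 10))  # simple backoff between attempts
    return result

# illustrative usage:
# upload_with_retry(oss_upload, "report.pdf",
#                   max_retries=OSSUploadConfig.max_retries,
#                   driver_name="HPLC", exp_type="analysis")
```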
@@ -405,19 +405,9 @@ class RunningResultChecker(DriverChecker):
                for i in range(self.driver._finished, temp):
                    sample_id = self.driver._get_resource_sample_id(self.driver._wf_name, i)  # 从0开始计数
                    pdf, txt = self.driver.get_data_file(i + 1)
                    # 使用新的OSS上传接口,传入driver_name和exp_type
                    pdf_result = oss_upload(pdf, filename=os.path.basename(pdf), driver_name="HPLC", exp_type="analysis")
                    txt_result = oss_upload(txt, filename=os.path.basename(txt), driver_name="HPLC", exp_type="result")

                    if pdf_result["success"]:
                        print(f"PDF上传成功: {pdf_result['oss_path']}")
                    else:
                        print(f"PDF上传失败: {pdf_result['original_path']}")

                    if txt_result["success"]:
                        print(f"TXT上传成功: {txt_result['oss_path']}")
                    else:
                        print(f"TXT上传失败: {txt_result['original_path']}")
                    device_id = self.driver.device_id if hasattr(self.driver, "device_id") else "default"
                    oss_upload(pdf, f"hplc/{sample_id}/{os.path.basename(pdf)}", process_key="example", device_id=device_id)
                    oss_upload(txt, f"hplc/{sample_id}/{os.path.basename(txt)}", process_key="HPLC-txt-result", device_id=device_id)
                    # self.driver.extract_data_from_txt()
            except Exception as ex:
                self.driver._finished = 0
@@ -466,12 +456,8 @@ if __name__ == "__main__":
    }
    sample_id = obj._get_resource_sample_id("test", 0)
    pdf, txt = obj.get_data_file("1", after_time=datetime(2024, 11, 6, 19, 3, 6))
    # 使用新的OSS上传接口,传入driver_name和exp_type
    pdf_result = oss_upload(pdf, filename=os.path.basename(pdf), driver_name="HPLC", exp_type="analysis")
    txt_result = oss_upload(txt, filename=os.path.basename(txt), driver_name="HPLC", exp_type="result")

    print(f"PDF上传结果: {pdf_result}")
    print(f"TXT上传结果: {txt_result}")
    oss_upload(pdf, f"hplc/{sample_id}/{os.path.basename(pdf)}", process_key="example")
    oss_upload(txt, f"hplc/{sample_id}/{os.path.basename(txt)}", process_key="HPLC-txt-result")
    # driver = HPLCDriver()
    # for i in range(10000):
    #     print({k: v for k, v in driver._device_status.items() if isinstance(v, str)})

@@ -147,9 +147,6 @@ class LiquidHandlerMiddleware(LiquidHandler):
|
||||
offsets: Optional[List[Coordinate]] = None,
|
||||
**backend_kwargs,
|
||||
):
|
||||
# 如果 use_channels 为 None,使用默认值(所有通道)
|
||||
if use_channels is None:
|
||||
use_channels = list(range(self.channel_num))
|
||||
if not offsets or (isinstance(offsets, list) and len(offsets) != len(use_channels)):
|
||||
offsets = [Coordinate.zero()] * len(use_channels)
|
||||
if self._simulator:
|
||||
@@ -762,7 +759,7 @@ class LiquidHandlerAbstract(LiquidHandlerMiddleware):
|
||||
blow_out_air_volume=current_dis_blow_out_air_volume,
|
||||
spread=spread,
|
||||
)
|
||||
if delays is not None and len(delays) > 1:
|
||||
if delays is not None:
|
||||
await self.custom_delay(seconds=delays[1])
|
||||
await self.touch_tip(current_targets)
|
||||
await self.discard_tips()
|
||||
@@ -836,19 +833,17 @@ class LiquidHandlerAbstract(LiquidHandlerMiddleware):
|
||||
spread=spread,
|
||||
)
|
||||
|
||||
if delays is not None and len(delays) > 1:
|
||||
if delays is not None:
|
||||
await self.custom_delay(seconds=delays[1])
|
||||
# 只有在 mix_time 有效时才调用 mix
|
||||
if mix_time is not None and mix_time > 0:
|
||||
await self.mix(
|
||||
targets=[targets[_]],
|
||||
mix_time=mix_time,
|
||||
mix_vol=mix_vol,
|
||||
offsets=offsets if offsets else None,
|
||||
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
|
||||
mix_rate=mix_rate if mix_rate else None,
|
||||
)
|
||||
if delays is not None and len(delays) > 1:
|
||||
await self.mix(
|
||||
targets=[targets[_]],
|
||||
mix_time=mix_time,
|
||||
mix_vol=mix_vol,
|
||||
offsets=offsets if offsets else None,
|
||||
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
|
||||
mix_rate=mix_rate if mix_rate else None,
|
||||
)
|
||||
if delays is not None:
|
||||
await self.custom_delay(seconds=delays[1])
|
||||
await self.touch_tip(targets[_])
|
||||
await self.discard_tips()
|
||||
@@ -898,20 +893,18 @@ class LiquidHandlerAbstract(LiquidHandlerMiddleware):
|
||||
blow_out_air_volume=current_dis_blow_out_air_volume,
|
||||
spread=spread,
|
||||
)
|
||||
if delays is not None and len(delays) > 1:
|
||||
if delays is not None:
|
||||
await self.custom_delay(seconds=delays[1])
|
||||
|
||||
# 只有在 mix_time 有效时才调用 mix
|
||||
if mix_time is not None and mix_time > 0:
|
||||
await self.mix(
|
||||
targets=current_targets,
|
||||
mix_time=mix_time,
|
||||
mix_vol=mix_vol,
|
||||
offsets=offsets if offsets else None,
|
||||
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
|
||||
mix_rate=mix_rate if mix_rate else None,
|
||||
)
|
||||
if delays is not None and len(delays) > 1:
|
||||
await self.mix(
|
||||
targets=current_targets,
|
||||
mix_time=mix_time,
|
||||
mix_vol=mix_vol,
|
||||
offsets=offsets if offsets else None,
|
||||
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
|
||||
mix_rate=mix_rate if mix_rate else None,
|
||||
)
|
||||
if delays is not None:
|
||||
await self.custom_delay(seconds=delays[1])
|
||||
await self.touch_tip(current_targets)
|
||||
await self.discard_tips()
|
||||
@@ -949,146 +942,60 @@ class LiquidHandlerAbstract(LiquidHandlerMiddleware):
        delays: Optional[List[int]] = None,
        none_keys: List[str] = [],
    ):
        """Transfer liquid with automatic mode detection.

        Supports three transfer modes:
        1. One-to-many (1 source -> N targets): Distribute from one source to multiple targets
        2. One-to-one (N sources -> N targets): Standard transfer, each source to corresponding target
        3. Many-to-one (N sources -> 1 target): Combine multiple sources into one target
        """Transfer liquid from each *source* well/plate to the corresponding *target*.

        Parameters
        ----------
        asp_vols, dis_vols
            Single volume (µL) or list. Automatically expanded based on transfer mode.
            Single volume (µL) or list matching the number of transfers.
        sources, targets
            Containers (wells or plates). Length determines transfer mode:
            - len(sources) == 1, len(targets) > 1: One-to-many mode
            - len(sources) == len(targets): One-to-one mode
            - len(sources) > 1, len(targets) == 1: Many-to-one mode
            Same‑length sequences of containers (wells or plates). In 96‑well mode
            each must contain exactly one plate.
        tip_racks
            One or more TipRacks providing fresh tips.
        is_96_well
            Set *True* to use the 96‑channel head.
        """

        # 确保 use_channels 有默认值
        if use_channels is None:
            use_channels = [0] if self.channel_num >= 1 else list(range(self.channel_num))


        if is_96_well:
            pass  # This mode is not verified.
        else:
            # 转换体积参数为列表
            if isinstance(asp_vols, (int, float)):
                asp_vols = [float(asp_vols)]
            else:
                asp_vols = [float(v) for v in asp_vols]

            if isinstance(dis_vols, (int, float)):
                dis_vols = [float(dis_vols)]
            else:
                dis_vols = [float(v) for v in dis_vols]

            # 识别传输模式
            num_sources = len(sources)
            num_targets = len(targets)

            if num_sources == 1 and num_targets > 1:
                # 模式1: 一对多 (1 source -> N targets)
                await self._transfer_one_to_many(
                    sources[0], targets, tip_racks, use_channels,
                    asp_vols, dis_vols, asp_flow_rates, dis_flow_rates,
                    offsets, touch_tip, liquid_height, blow_out_air_volume,
                    spread, mix_stage, mix_times, mix_vol, mix_rate,
                    mix_liquid_height, delays
                )
            elif num_sources > 1 and num_targets == 1:
                # 模式2: 多对一 (N sources -> 1 target)
                await self._transfer_many_to_one(
                    sources, targets[0], tip_racks, use_channels,
                    asp_vols, dis_vols, asp_flow_rates, dis_flow_rates,
                    offsets, touch_tip, liquid_height, blow_out_air_volume,
                    spread, mix_stage, mix_times, mix_vol, mix_rate,
                    mix_liquid_height, delays
                )
            elif num_sources == num_targets:
                # 模式3: 一对一 (N sources -> N targets) - 原有逻辑
                await self._transfer_one_to_one(
                    sources, targets, tip_racks, use_channels,
                    asp_vols, dis_vols, asp_flow_rates, dis_flow_rates,
                    offsets, touch_tip, liquid_height, blow_out_air_volume,
                    spread, mix_stage, mix_times, mix_vol, mix_rate,
                    mix_liquid_height, delays
                )
            else:
                raise ValueError(
                    f"Unsupported transfer mode: {num_sources} sources -> {num_targets} targets. "
                    "Supported modes: 1->N, N->1, or N->N."
                )
        if len(asp_vols) != len(targets):
            raise ValueError(f"Length of `asp_vols` {len(asp_vols)} must match `targets` {len(targets)}.")

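The dispatch above reduces to a simple rule on the two lengths. A standalone sketch of just that rule (the helper name and the asserts are illustrative; the real dispatch stays inside `transfer`):

```python
def detect_transfer_mode(num_sources: int, num_targets: int) -> str:
    """Mirror of the mode-selection logic in transfer()."""
    if num_sources == 1 and num_targets > 1:
        return "one_to_many"
    if num_sources > 1 and num_targets == 1:
        return "many_to_one"
    if num_sources == num_targets:
        return "one_to_one"
    raise ValueError(
        f"Unsupported transfer mode: {num_sources} sources -> {num_targets} targets. "
        "Supported modes: 1->N, N->1, or N->N."
    )

assert detect_transfer_mode(1, 8) == "one_to_many"
assert detect_transfer_mode(8, 1) == "many_to_one"
assert detect_transfer_mode(4, 4) == "one_to_one"
```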
async def _transfer_one_to_one(
|
||||
self,
|
||||
sources: Sequence[Container],
|
||||
targets: Sequence[Container],
|
||||
tip_racks: Sequence[TipRack],
|
||||
use_channels: List[int],
|
||||
asp_vols: List[float],
|
||||
dis_vols: List[float],
|
||||
asp_flow_rates: Optional[List[Optional[float]]],
|
||||
dis_flow_rates: Optional[List[Optional[float]]],
|
||||
offsets: Optional[List[Coordinate]],
|
||||
touch_tip: bool,
|
||||
liquid_height: Optional[List[Optional[float]]],
|
||||
blow_out_air_volume: Optional[List[Optional[float]]],
|
||||
spread: Literal["wide", "tight", "custom"],
|
||||
mix_stage: Optional[Literal["none", "before", "after", "both"]],
|
||||
mix_times: Optional[int],
|
||||
mix_vol: Optional[int],
|
||||
mix_rate: Optional[int],
|
||||
mix_liquid_height: Optional[float],
|
||||
delays: Optional[List[int]],
|
||||
):
|
||||
"""一对一传输模式:N sources -> N targets"""
|
||||
# 验证参数长度
|
||||
if len(asp_vols) != len(targets):
|
||||
raise ValueError(f"Length of `asp_vols` {len(asp_vols)} must match `targets` {len(targets)}.")
|
||||
if len(dis_vols) != len(targets):
|
||||
raise ValueError(f"Length of `dis_vols` {len(dis_vols)} must match `targets` {len(targets)}.")
|
||||
if len(sources) != len(targets):
|
||||
raise ValueError(f"Length of `sources` {len(sources)} must match `targets` {len(targets)}.")
|
||||
# 首先应该对任务分组,然后每次1个/8个进行操作处理
|
||||
if len(use_channels) == 1:
|
||||
for _ in range(len(targets)):
|
||||
tip = []
|
||||
for ___ in range(len(use_channels)):
|
||||
tip.extend(next(self.current_tip))
|
||||
await self.pick_up_tips(tip)
|
||||
|
||||
if len(use_channels) == 1:
|
||||
for _ in range(len(targets)):
|
||||
tip = []
|
||||
for ___ in range(len(use_channels)):
|
||||
tip.extend(next(self.current_tip))
|
||||
await self.pick_up_tips(tip)
|
||||
|
||||
await self.aspirate(
|
||||
resources=[sources[_]],
|
||||
vols=[asp_vols[_]],
|
||||
use_channels=use_channels,
|
||||
flow_rates=[asp_flow_rates[_]] if asp_flow_rates and len(asp_flow_rates) > _ else None,
|
||||
offsets=[offsets[_]] if offsets and len(offsets) > _ else None,
|
||||
liquid_height=[liquid_height[_]] if liquid_height and len(liquid_height) > _ else None,
|
||||
blow_out_air_volume=[blow_out_air_volume[_]] if blow_out_air_volume and len(blow_out_air_volume) > _ else None,
|
||||
spread=spread,
|
||||
)
|
||||
if delays is not None:
|
||||
await self.custom_delay(seconds=delays[0])
|
||||
await self.dispense(
|
||||
resources=[targets[_]],
|
||||
vols=[dis_vols[_]],
|
||||
use_channels=use_channels,
|
||||
flow_rates=[dis_flow_rates[_]] if dis_flow_rates and len(dis_flow_rates) > _ else None,
|
||||
offsets=[offsets[_]] if offsets and len(offsets) > _ else None,
|
||||
blow_out_air_volume=[blow_out_air_volume[_]] if blow_out_air_volume and len(blow_out_air_volume) > _ else None,
|
||||
liquid_height=[liquid_height[_]] if liquid_height and len(liquid_height) > _ else None,
|
||||
spread=spread,
|
||||
)
|
||||
if delays is not None and len(delays) > 1:
|
||||
await self.custom_delay(seconds=delays[1])
|
||||
if mix_stage in ["after", "both"] and mix_times is not None and mix_times > 0:
|
||||
await self.aspirate(
|
||||
resources=[sources[_]],
|
||||
vols=[asp_vols[_]],
|
||||
use_channels=use_channels,
|
||||
flow_rates=[asp_flow_rates[0]] if asp_flow_rates else None,
|
||||
offsets=[offsets[0]] if offsets else None,
|
||||
liquid_height=[liquid_height[0]] if liquid_height else None,
|
||||
blow_out_air_volume=[blow_out_air_volume[0]] if blow_out_air_volume else None,
|
||||
spread=spread,
|
||||
)
|
||||
if delays is not None:
|
||||
await self.custom_delay(seconds=delays[0])
|
||||
await self.dispense(
|
||||
resources=[targets[_]],
|
||||
vols=[dis_vols[_]],
|
||||
use_channels=use_channels,
|
||||
flow_rates=[dis_flow_rates[1]] if dis_flow_rates else None,
|
||||
offsets=[offsets[1]] if offsets else None,
|
||||
blow_out_air_volume=[blow_out_air_volume[1]] if blow_out_air_volume else None,
|
||||
liquid_height=[liquid_height[1]] if liquid_height else None,
|
||||
spread=spread,
|
||||
)
|
||||
if delays is not None:
|
||||
await self.custom_delay(seconds=delays[1])
|
||||
await self.mix(
|
||||
targets=[targets[_]],
|
||||
mix_time=mix_times,
|
||||
@@ -1097,425 +1004,75 @@ class LiquidHandlerAbstract(LiquidHandlerMiddleware):
|
||||
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
|
||||
mix_rate=mix_rate if mix_rate else None,
|
||||
)
|
||||
if delays is not None and len(delays) > 1:
|
||||
await self.custom_delay(seconds=delays[1])
|
||||
await self.touch_tip(targets[_])
|
||||
await self.discard_tips(use_channels=use_channels)
|
||||
if delays is not None:
|
||||
await self.custom_delay(seconds=delays[1])
|
||||
await self.touch_tip(targets[_])
|
||||
await self.discard_tips()
|
||||
|
||||
elif len(use_channels) == 8:
|
||||
if len(targets) % 8 != 0:
|
||||
raise ValueError(f"Length of `targets` {len(targets)} must be a multiple of 8 for 8-channel mode.")
|
||||
elif len(use_channels) == 8:
|
||||
# 对于8个的情况,需要判断此时任务是不是能被8通道移液站来成功处理
|
||||
if len(targets) % 8 != 0:
|
||||
raise ValueError(f"Length of `targets` {len(targets)} must be a multiple of 8 for 8-channel mode.")
|
||||
|
||||
for i in range(0, len(targets), 8):
|
||||
tip = []
|
||||
for _ in range(len(use_channels)):
|
||||
tip.extend(next(self.current_tip))
|
||||
await self.pick_up_tips(tip)
|
||||
current_targets = targets[i:i + 8]
|
||||
current_reagent_sources = sources[i:i + 8]
|
||||
current_asp_vols = asp_vols[i:i + 8]
|
||||
current_dis_vols = dis_vols[i:i + 8]
|
||||
current_asp_flow_rates = asp_flow_rates[i:i + 8] if asp_flow_rates else None
|
||||
current_asp_offset = offsets[i:i + 8] if offsets else [None] * 8
|
||||
current_dis_offset = offsets[i:i + 8] if offsets else [None] * 8
|
||||
current_asp_liquid_height = liquid_height[i:i + 8] if liquid_height else [None] * 8
|
||||
current_dis_liquid_height = liquid_height[i:i + 8] if liquid_height else [None] * 8
|
||||
current_asp_blow_out_air_volume = blow_out_air_volume[i:i + 8] if blow_out_air_volume else [None] * 8
|
||||
current_dis_blow_out_air_volume = blow_out_air_volume[i:i + 8] if blow_out_air_volume else [None] * 8
|
||||
current_dis_flow_rates = dis_flow_rates[i:i + 8] if dis_flow_rates else None
|
||||
# 8个8个来取任务序列
|
||||
|
||||
await self.aspirate(
|
||||
resources=current_reagent_sources,
|
||||
vols=current_asp_vols,
|
||||
use_channels=use_channels,
|
||||
flow_rates=current_asp_flow_rates,
|
||||
offsets=current_asp_offset,
|
||||
blow_out_air_volume=current_asp_blow_out_air_volume,
|
||||
liquid_height=current_asp_liquid_height,
|
||||
spread=spread,
|
||||
)
|
||||
|
||||
if delays is not None:
|
||||
await self.custom_delay(seconds=delays[0])
|
||||
await self.dispense(
|
||||
resources=current_targets,
|
||||
vols=current_dis_vols,
|
||||
use_channels=use_channels,
|
||||
flow_rates=current_dis_flow_rates,
|
||||
offsets=current_dis_offset,
|
||||
blow_out_air_volume=current_dis_blow_out_air_volume,
|
||||
liquid_height=current_dis_liquid_height,
|
||||
spread=spread,
|
||||
)
|
||||
if delays is not None and len(delays) > 1:
|
||||
await self.custom_delay(seconds=delays[1])
|
||||
|
||||
if mix_stage in ["after", "both"] and mix_times is not None and mix_times > 0:
|
||||
await self.mix(
|
||||
targets=current_targets,
|
||||
mix_time=mix_times,
|
||||
mix_vol=mix_vol,
|
||||
offsets=offsets if offsets else None,
|
||||
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
|
||||
mix_rate=mix_rate if mix_rate else None,
|
||||
)
|
||||
if delays is not None and len(delays) > 1:
|
||||
await self.custom_delay(seconds=delays[1])
|
||||
await self.touch_tip(current_targets)
|
||||
await self.discard_tips([0,1,2,3,4,5,6,7])
|
||||
|
||||
async def _transfer_one_to_many(
|
||||
self,
|
||||
source: Container,
|
||||
targets: Sequence[Container],
|
||||
tip_racks: Sequence[TipRack],
|
||||
use_channels: List[int],
|
||||
asp_vols: List[float],
|
||||
dis_vols: List[float],
|
||||
asp_flow_rates: Optional[List[Optional[float]]],
|
||||
dis_flow_rates: Optional[List[Optional[float]]],
|
||||
offsets: Optional[List[Coordinate]],
|
||||
touch_tip: bool,
|
||||
liquid_height: Optional[List[Optional[float]]],
|
||||
blow_out_air_volume: Optional[List[Optional[float]]],
|
||||
spread: Literal["wide", "tight", "custom"],
|
||||
mix_stage: Optional[Literal["none", "before", "after", "both"]],
|
||||
mix_times: Optional[int],
|
||||
mix_vol: Optional[int],
|
||||
mix_rate: Optional[int],
|
||||
mix_liquid_height: Optional[float],
|
||||
delays: Optional[List[int]],
|
||||
):
|
||||
"""一对多传输模式:1 source -> N targets"""
|
||||
# 验证和扩展体积参数
|
||||
if len(asp_vols) == 1:
|
||||
# 如果只提供一个吸液体积,计算总吸液体积(所有分液体积之和)
|
||||
total_asp_vol = sum(dis_vols)
|
||||
asp_vol = asp_vols[0] if asp_vols[0] >= total_asp_vol else total_asp_vol
|
||||
else:
|
||||
raise ValueError("For one-to-many mode, `asp_vols` should be a single value or list with one element.")
|
||||
|
||||
if len(dis_vols) != len(targets):
|
||||
raise ValueError(f"Length of `dis_vols` {len(dis_vols)} must match `targets` {len(targets)}.")
|
||||
|
||||
if len(use_channels) == 1:
|
||||
# 单通道模式:一次吸液,多次分液
|
||||
tip = []
|
||||
for _ in range(len(use_channels)):
|
||||
tip.extend(next(self.current_tip))
|
||||
await self.pick_up_tips(tip)
|
||||
|
||||
# 从源容器吸液(总体积)
|
||||
await self.aspirate(
|
||||
resources=[source],
|
||||
vols=[asp_vol],
|
||||
use_channels=use_channels,
|
||||
flow_rates=[asp_flow_rates[0]] if asp_flow_rates and len(asp_flow_rates) > 0 else None,
|
||||
offsets=[offsets[0]] if offsets and len(offsets) > 0 else None,
|
||||
liquid_height=[liquid_height[0]] if liquid_height and len(liquid_height) > 0 else None,
|
||||
blow_out_air_volume=[blow_out_air_volume[0]] if blow_out_air_volume and len(blow_out_air_volume) > 0 else None,
|
||||
spread=spread,
|
||||
)
|
||||
|
||||
if delays is not None:
|
||||
await self.custom_delay(seconds=delays[0])
|
||||
|
||||
# 分多次分液到不同的目标容器
|
||||
for idx, target in enumerate(targets):
|
||||
await self.dispense(
|
||||
resources=[target],
|
||||
vols=[dis_vols[idx]],
|
||||
use_channels=use_channels,
|
||||
flow_rates=[dis_flow_rates[idx]] if dis_flow_rates and len(dis_flow_rates) > idx else None,
|
||||
offsets=[offsets[idx]] if offsets and len(offsets) > idx else None,
|
||||
blow_out_air_volume=[blow_out_air_volume[idx]] if blow_out_air_volume and len(blow_out_air_volume) > idx else None,
|
||||
liquid_height=[liquid_height[idx]] if liquid_height and len(liquid_height) > idx else None,
|
||||
spread=spread,
|
||||
)
|
||||
if delays is not None and len(delays) > 1:
|
||||
await self.custom_delay(seconds=delays[1])
|
||||
if mix_stage in ["after", "both"] and mix_times is not None and mix_times > 0:
|
||||
await self.mix(
|
||||
targets=[target],
|
||||
mix_time=mix_times,
|
||||
mix_vol=mix_vol,
|
||||
offsets=offsets[idx:idx+1] if offsets else None,
|
||||
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
|
||||
mix_rate=mix_rate if mix_rate else None,
|
||||
)
|
||||
if touch_tip:
|
||||
await self.touch_tip([target])
|
||||
|
||||
await self.discard_tips(use_channels=use_channels)
|
||||
|
||||
elif len(use_channels) == 8:
|
||||
# 8通道模式:需要确保目标数量是8的倍数
|
||||
if len(targets) % 8 != 0:
|
||||
raise ValueError(f"For 8-channel mode, number of targets {len(targets)} must be a multiple of 8.")
|
||||
|
||||
# 每次处理8个目标
|
||||
for i in range(0, len(targets), 8):
|
||||
tip = []
|
||||
for _ in range(len(use_channels)):
|
||||
tip.extend(next(self.current_tip))
|
||||
await self.pick_up_tips(tip)
|
||||
|
||||
current_targets = targets[i:i + 8]
|
||||
current_dis_vols = dis_vols[i:i + 8]
|
||||
|
||||
# 8个通道都从同一个源容器吸液,每个通道的吸液体积等于对应的分液体积
|
||||
current_asp_flow_rates = asp_flow_rates[0:1] * 8 if asp_flow_rates and len(asp_flow_rates) > 0 else None
|
||||
current_asp_offset = offsets[0:1] * 8 if offsets and len(offsets) > 0 else [None] * 8
|
||||
current_asp_liquid_height = liquid_height[0:1] * 8 if liquid_height and len(liquid_height) > 0 else [None] * 8
|
||||
current_asp_blow_out_air_volume = blow_out_air_volume[0:1] * 8 if blow_out_air_volume and len(blow_out_air_volume) > 0 else [None] * 8
|
||||
|
||||
# 从源容器吸液(8个通道都从同一个源,但每个通道的吸液体积不同)
|
||||
await self.aspirate(
|
||||
resources=[source] * 8, # 8个通道都从同一个源
|
||||
vols=current_dis_vols, # 每个通道的吸液体积等于对应的分液体积
|
||||
use_channels=use_channels,
|
||||
flow_rates=current_asp_flow_rates,
|
||||
offsets=current_asp_offset,
|
||||
liquid_height=current_asp_liquid_height,
|
||||
blow_out_air_volume=current_asp_blow_out_air_volume,
|
||||
spread=spread,
|
||||
)
|
||||
|
||||
if delays is not None:
|
||||
await self.custom_delay(seconds=delays[0])
|
||||
|
||||
# 分液到8个目标
|
||||
current_dis_flow_rates = dis_flow_rates[i:i + 8] if dis_flow_rates else None
|
||||
current_dis_offset = offsets[i:i + 8] if offsets else [None] * 8
|
||||
current_dis_liquid_height = liquid_height[i:i + 8] if liquid_height else [None] * 8
|
||||
current_dis_blow_out_air_volume = blow_out_air_volume[i:i + 8] if blow_out_air_volume else [None] * 8
|
||||
|
||||
await self.dispense(
|
||||
resources=current_targets,
|
||||
vols=current_dis_vols,
|
||||
use_channels=use_channels,
|
||||
flow_rates=current_dis_flow_rates,
|
||||
offsets=current_dis_offset,
|
||||
blow_out_air_volume=current_dis_blow_out_air_volume,
|
||||
liquid_height=current_dis_liquid_height,
|
||||
spread=spread,
|
||||
)
|
||||
|
||||
if delays is not None and len(delays) > 1:
|
||||
await self.custom_delay(seconds=delays[1])
|
||||
|
||||
if mix_stage in ["after", "both"] and mix_times is not None and mix_times > 0:
|
||||
await self.mix(
|
||||
targets=current_targets,
|
||||
mix_time=mix_times,
|
||||
mix_vol=mix_vol,
|
||||
offsets=offsets if offsets else None,
|
||||
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
|
||||
mix_rate=mix_rate if mix_rate else None,
|
||||
)
|
||||
|
||||
if touch_tip:
|
||||
await self.touch_tip(current_targets)
|
||||
|
||||
await self.discard_tips([0,1,2,3,4,5,6,7])
|
||||
|
||||
async def _transfer_many_to_one(
|
||||
self,
|
||||
sources: Sequence[Container],
|
||||
target: Container,
|
||||
tip_racks: Sequence[TipRack],
|
||||
use_channels: List[int],
|
||||
asp_vols: List[float],
|
||||
dis_vols: List[float],
|
||||
asp_flow_rates: Optional[List[Optional[float]]],
|
||||
dis_flow_rates: Optional[List[Optional[float]]],
|
||||
offsets: Optional[List[Coordinate]],
|
||||
touch_tip: bool,
|
||||
liquid_height: Optional[List[Optional[float]]],
|
||||
blow_out_air_volume: Optional[List[Optional[float]]],
|
||||
spread: Literal["wide", "tight", "custom"],
|
||||
mix_stage: Optional[Literal["none", "before", "after", "both"]],
|
||||
mix_times: Optional[int],
|
||||
mix_vol: Optional[int],
|
||||
mix_rate: Optional[int],
|
||||
mix_liquid_height: Optional[float],
|
||||
delays: Optional[List[int]],
|
||||
):
|
||||
"""多对一传输模式:N sources -> 1 target(汇总/混合)"""
|
||||
# 验证和扩展体积参数
|
||||
if len(asp_vols) != len(sources):
|
||||
raise ValueError(f"Length of `asp_vols` {len(asp_vols)} must match `sources` {len(sources)}.")
|
||||
|
||||
# 支持两种模式:
|
||||
# 1. dis_vols 为单个值:所有源汇总,使用总吸液体积或指定分液体积
|
||||
# 2. dis_vols 长度等于 asp_vols:每个源按不同比例分液(按比例混合)
|
||||
if len(dis_vols) == 1:
|
||||
# 模式1:使用单个分液体积
|
||||
total_dis_vol = sum(asp_vols)
|
||||
dis_vol = dis_vols[0] if dis_vols[0] >= total_dis_vol else total_dis_vol
|
||||
use_proportional_mixing = False
|
||||
elif len(dis_vols) == len(asp_vols):
|
||||
# 模式2:按不同比例混合
|
||||
use_proportional_mixing = True
|
||||
else:
|
||||
raise ValueError(
|
||||
f"For many-to-one mode, `dis_vols` should be a single value or list with length {len(asp_vols)} "
|
||||
f"(matching `asp_vols`). Got length {len(dis_vols)}."
|
||||
)
|
||||
|
||||
if len(use_channels) == 1:
|
||||
# 单通道模式:多次吸液,一次分液
|
||||
# 先混合前(如果需要)
|
||||
if mix_stage in ["before", "both"] and mix_times is not None and mix_times > 0:
|
||||
# 注意:在吸液前混合源容器通常不常见,这里跳过
|
||||
pass
|
||||
|
||||
# 从每个源容器吸液并分液到目标容器
|
||||
for idx, source in enumerate(sources):
|
||||
tip = []
|
||||
for _ in range(len(use_channels)):
|
||||
tip.extend(next(self.current_tip))
|
||||
await self.pick_up_tips(tip)
|
||||
|
||||
await self.aspirate(
|
||||
resources=[source],
|
||||
vols=[asp_vols[idx]],
|
||||
use_channels=use_channels,
|
||||
flow_rates=[asp_flow_rates[idx]] if asp_flow_rates and len(asp_flow_rates) > idx else None,
|
||||
offsets=[offsets[idx]] if offsets and len(offsets) > idx else None,
|
||||
liquid_height=[liquid_height[idx]] if liquid_height and len(liquid_height) > idx else None,
|
||||
blow_out_air_volume=[blow_out_air_volume[idx]] if blow_out_air_volume and len(blow_out_air_volume) > idx else None,
|
||||
spread=spread,
|
||||
)
|
||||
|
||||
if delays is not None:
|
||||
await self.custom_delay(seconds=delays[0])
|
||||
|
||||
# 分液到目标容器
|
||||
if use_proportional_mixing:
|
||||
# 按不同比例混合:使用对应的 dis_vols
|
||||
dis_vol = dis_vols[idx]
|
||||
dis_flow_rate = dis_flow_rates[idx] if dis_flow_rates and len(dis_flow_rates) > idx else None
|
||||
dis_offset = offsets[idx] if offsets and len(offsets) > idx else None
|
||||
dis_liquid_height = liquid_height[idx] if liquid_height and len(liquid_height) > idx else None
|
||||
dis_blow_out = blow_out_air_volume[idx] if blow_out_air_volume and len(blow_out_air_volume) > idx else None
|
||||
else:
|
||||
# 标准模式:分液体积等于吸液体积
|
||||
dis_vol = asp_vols[idx]
|
||||
dis_flow_rate = dis_flow_rates[0] if dis_flow_rates and len(dis_flow_rates) > 0 else None
|
||||
dis_offset = offsets[0] if offsets and len(offsets) > 0 else None
|
||||
dis_liquid_height = liquid_height[0] if liquid_height and len(liquid_height) > 0 else None
|
||||
dis_blow_out = blow_out_air_volume[0] if blow_out_air_volume and len(blow_out_air_volume) > 0 else None
|
||||
|
||||
await self.dispense(
|
||||
resources=[target],
|
||||
vols=[dis_vol],
|
||||
use_channels=use_channels,
|
||||
flow_rates=[dis_flow_rate] if dis_flow_rate is not None else None,
|
||||
offsets=[dis_offset] if dis_offset is not None else None,
|
||||
blow_out_air_volume=[dis_blow_out] if dis_blow_out is not None else None,
|
||||
liquid_height=[dis_liquid_height] if dis_liquid_height is not None else None,
|
||||
spread=spread,
|
||||
)
|
||||
|
||||
if delays is not None and len(delays) > 1:
|
||||
await self.custom_delay(seconds=delays[1])
|
||||
|
||||
await self.discard_tips(use_channels=use_channels)
|
||||
|
||||
# 最后在目标容器中混合(如果需要)
|
||||
if mix_stage in ["after", "both"] and mix_times is not None and mix_times > 0:
|
||||
await self.mix(
|
||||
targets=[target],
|
||||
mix_time=mix_times,
|
||||
mix_vol=mix_vol,
|
||||
offsets=offsets[0:1] if offsets else None,
|
||||
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
|
||||
mix_rate=mix_rate if mix_rate else None,
|
||||
)
|
||||
|
||||
if touch_tip:
|
||||
await self.touch_tip([target])
|
||||
|
||||
elif len(use_channels) == 8:
|
||||
# 8通道模式:需要确保源数量是8的倍数
|
||||
if len(sources) % 8 != 0:
|
||||
raise ValueError(f"For 8-channel mode, number of sources {len(sources)} must be a multiple of 8.")
|
||||
|
||||
# 每次处理8个源
|
||||
for i in range(0, len(sources), 8):
|
||||
tip = []
|
||||
for _ in range(len(use_channels)):
|
||||
tip.extend(next(self.current_tip))
|
||||
await self.pick_up_tips(tip)
|
||||
|
||||
current_sources = sources[i:i + 8]
|
||||
current_asp_vols = asp_vols[i:i + 8]
|
||||
current_asp_flow_rates = asp_flow_rates[i:i + 8] if asp_flow_rates else None
|
||||
current_asp_offset = offsets[i:i + 8] if offsets else [None] * 8
|
||||
current_asp_liquid_height = liquid_height[i:i + 8] if liquid_height else [None] * 8
|
||||
current_asp_blow_out_air_volume = blow_out_air_volume[i:i + 8] if blow_out_air_volume else [None] * 8
|
||||
|
||||
# 从8个源容器吸液
|
||||
await self.aspirate(
|
||||
resources=current_sources,
|
||||
vols=current_asp_vols,
|
||||
use_channels=use_channels,
|
||||
flow_rates=current_asp_flow_rates,
|
||||
offsets=current_asp_offset,
|
||||
blow_out_air_volume=current_asp_blow_out_air_volume,
|
||||
liquid_height=current_asp_liquid_height,
|
||||
spread=spread,
|
||||
)
|
||||
|
||||
if delays is not None:
|
||||
await self.custom_delay(seconds=delays[0])
|
||||
|
||||
# 分液到目标容器(每个通道分液到同一个目标)
|
||||
if use_proportional_mixing:
|
||||
# 按比例混合:使用对应的 dis_vols
|
||||
for i in range(0, len(targets), 8):
|
||||
# 取出8个任务
|
||||
tip = []
|
||||
for _ in range(len(use_channels)):
|
||||
tip.extend(next(self.current_tip))
|
||||
await self.pick_up_tips(tip)
|
||||
current_targets = targets[i:i + 8]
|
||||
current_reagent_sources = sources[i:i + 8]
|
||||
current_asp_vols = asp_vols[i:i + 8]
|
||||
current_dis_vols = dis_vols[i:i + 8]
|
||||
current_dis_flow_rates = dis_flow_rates[i:i + 8] if dis_flow_rates else None
|
||||
current_dis_offset = offsets[i:i + 8] if offsets else [None] * 8
|
||||
current_dis_liquid_height = liquid_height[i:i + 8] if liquid_height else [None] * 8
|
||||
current_dis_blow_out_air_volume = blow_out_air_volume[i:i + 8] if blow_out_air_volume else [None] * 8
|
||||
else:
|
||||
# 标准模式:每个通道分液体积等于其吸液体积
|
||||
current_dis_vols = current_asp_vols
|
||||
current_dis_flow_rates = dis_flow_rates[0:1] * 8 if dis_flow_rates else None
|
||||
current_dis_offset = offsets[0:1] * 8 if offsets else [None] * 8
|
||||
current_dis_liquid_height = liquid_height[0:1] * 8 if liquid_height else [None] * 8
|
||||
current_dis_blow_out_air_volume = blow_out_air_volume[0:1] * 8 if blow_out_air_volume else [None] * 8
|
||||
|
||||
await self.dispense(
|
||||
resources=[target] * 8, # 8个通道都分到同一个目标
|
||||
vols=current_dis_vols,
|
||||
use_channels=use_channels,
|
||||
flow_rates=current_dis_flow_rates,
|
||||
offsets=current_dis_offset,
|
||||
blow_out_air_volume=current_dis_blow_out_air_volume,
|
||||
liquid_height=current_dis_liquid_height,
|
||||
spread=spread,
|
||||
)
|
||||
|
||||
if delays is not None and len(delays) > 1:
|
||||
await self.custom_delay(seconds=delays[1])
|
||||
|
||||
await self.discard_tips([0,1,2,3,4,5,6,7])
|
||||
|
||||
# 最后在目标容器中混合(如果需要)
|
||||
if mix_stage in ["after", "both"] and mix_times is not None and mix_times > 0:
|
||||
await self.mix(
|
||||
targets=[target],
|
||||
mix_time=mix_times,
|
||||
mix_vol=mix_vol,
|
||||
offsets=offsets[0:1] if offsets else None,
|
||||
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
|
||||
mix_rate=mix_rate if mix_rate else None,
|
||||
)
|
||||
|
||||
if touch_tip:
|
||||
await self.touch_tip([target])
|
||||
current_asp_flow_rates = asp_flow_rates[i:i + 8]
|
||||
current_asp_offset = offsets[i:i + 8] if offsets else [None] * 8
|
||||
current_dis_offset = offsets[-i*8-8:len(offsets)-i*8] if offsets else [None] * 8
|
||||
current_asp_liquid_height = liquid_height[i:i + 8] if liquid_height else [None] * 8
|
||||
current_dis_liquid_height = liquid_height[-i*8-8:len(liquid_height)-i*8] if liquid_height else [None] * 8
|
||||
current_asp_blow_out_air_volume = blow_out_air_volume[i:i + 8] if blow_out_air_volume else [None] * 8
|
||||
current_dis_blow_out_air_volume = blow_out_air_volume[-i*8-8:len(blow_out_air_volume)-i*8] if blow_out_air_volume else [None] * 8
|
||||
current_dis_flow_rates = dis_flow_rates[i:i + 8] if dis_flow_rates else [None] * 8
|
||||
|
||||
await self.aspirate(
|
||||
resources=current_reagent_sources,
|
||||
vols=current_asp_vols,
|
||||
use_channels=use_channels,
|
||||
flow_rates=current_asp_flow_rates,
|
||||
offsets=current_asp_offset,
|
||||
blow_out_air_volume=current_asp_blow_out_air_volume,
|
||||
liquid_height=current_asp_liquid_height,
|
||||
spread=spread,
|
||||
)
|
||||
|
||||
if delays is not None:
|
||||
await self.custom_delay(seconds=delays[0])
|
||||
await self.dispense(
|
||||
resources=current_targets,
|
||||
vols=current_dis_vols,
|
||||
use_channels=use_channels,
|
||||
flow_rates=current_dis_flow_rates,
|
||||
offsets=current_dis_offset,
|
||||
blow_out_air_volume=current_dis_blow_out_air_volume,
|
||||
liquid_height=current_dis_liquid_height,
|
||||
spread=spread,
|
||||
)
|
||||
if delays is not None:
|
||||
await self.custom_delay(seconds=delays[1])
|
||||
|
||||
await self.mix(
|
||||
targets=current_targets,
|
||||
mix_time=mix_times,
|
||||
mix_vol=mix_vol,
|
||||
offsets=offsets if offsets else None,
|
||||
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
|
||||
mix_rate=mix_rate if mix_rate else None,
|
||||
)
|
||||
if delays is not None:
|
||||
await self.custom_delay(seconds=delays[1])
|
||||
await self.touch_tip(current_targets)
|
||||
await self.discard_tips([0,1,2,3,4,5,6,7])
|
||||
|
||||
# except Exception as e:
|
||||
# traceback.print_exc()
|
||||
|
||||
@@ -7,7 +7,7 @@ class VirtualMultiwayValve:
    """
    虚拟九通阀门 - 0号位连接transfer pump,1-8号位连接其他设备 🔄
    """
    def __init__(self, port: str = "VIRTUAL", positions: int = 8, **kwargs):
    def __init__(self, port: str = "VIRTUAL", positions: int = 8):
        self.port = port
        self.max_positions = positions  # 1-8号位
        self.total_positions = positions + 1  # 0-8号位,共9个位置

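A quick check of the constructor above: a default valve exposes positions 0-8, with 0 reserved for the transfer pump. Only `__init__` appears in this hunk, so no valve-switching calls are shown here.

```python
valve = VirtualMultiwayValve(port="VIRTUAL", positions=8)
print(valve.port, valve.max_positions, valve.total_positions)  # VIRTUAL 8 9
```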
@@ -192,23 +192,6 @@ class BioyondV1RPC(BaseRequest):
|
||||
return []
|
||||
return str(response.get("data", {}))
|
||||
|
||||
def material_type_list(self) -> list:
|
||||
"""查询物料类型列表
|
||||
|
||||
返回值:
|
||||
list: 物料类型数组,失败返回空列表
|
||||
"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/storage/material-type-list',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
"data": {},
|
||||
})
|
||||
if not response or response['code'] != 1:
|
||||
return []
|
||||
return response.get("data", [])
|
||||
|
||||
def material_inbound(self, material_id: str, location_id: str) -> dict:
|
||||
"""
|
||||
描述:指定库位入库一个物料
|
||||
@@ -238,26 +221,6 @@ class BioyondV1RPC(BaseRequest):
|
||||
# 入库成功时,即使没有 data 字段,也返回成功标识
|
||||
return response.get("data") or {"success": True}
|
||||
|
||||
def batch_inbound(self, inbound_items: List[Dict[str, Any]]) -> int:
|
||||
"""批量入库物料
|
||||
|
||||
参数:
|
||||
inbound_items: 入库条目列表,每项包含 materialId/locationId/quantity 等
|
||||
|
||||
返回值:
|
||||
int: 成功返回1,失败返回0
|
||||
"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/storage/batch-inbound',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
"data": inbound_items,
|
||||
})
|
||||
if not response or response['code'] != 1:
|
||||
return 0
|
||||
return response.get("code", 0)
|
||||
|
||||
def delete_material(self, material_id: str) -> dict:
|
||||
"""
|
||||
描述:删除尚未入库的物料
|
||||
@@ -326,66 +289,6 @@ class BioyondV1RPC(BaseRequest):
|
||||
return None
|
||||
return response
|
||||
|
||||
def batch_outbound(self, outbound_items: List[Dict[str, Any]]) -> int:
|
||||
"""批量出库物料
|
||||
|
||||
参数:
|
||||
outbound_items: 出库条目列表,每项包含 materialId/locationId/quantity 等
|
||||
|
||||
返回值:
|
||||
int: 成功返回1,失败返回0
|
||||
"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/storage/batch-outbound',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
"data": outbound_items,
|
||||
})
|
||||
if not response or response['code'] != 1:
|
||||
return 0
|
||||
return response.get("code", 0)
|
||||
|
||||
def material_info(self, material_id: str) -> dict:
|
||||
"""查询物料详情
|
||||
|
||||
参数:
|
||||
material_id: 物料ID
|
||||
|
||||
返回值:
|
||||
dict: 物料信息字典,失败返回空字典
|
||||
"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/storage/material-info',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
"data": material_id,
|
||||
})
|
||||
if not response or response['code'] != 1:
|
||||
return {}
|
||||
return response.get("data", {})
|
||||
|
||||
def reset_location(self, location_id: str) -> int:
|
||||
"""复位库位
|
||||
|
||||
参数:
|
||||
location_id: 库位ID
|
||||
|
||||
返回值:
|
||||
int: 成功返回1,失败返回0
|
||||
"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/storage/reset-location',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
"data": location_id,
|
||||
})
|
||||
if not response or response['code'] != 1:
|
||||
return 0
|
||||
return response.get("code", 0)
|
||||
|
||||
# ==================== 工作流查询相关接口 ====================
|
||||
|
||||
def query_workflow(self, json_str: str) -> dict:
|
||||
@@ -429,66 +332,6 @@ class BioyondV1RPC(BaseRequest):
|
||||
return {}
|
||||
return response.get("data", {})
|
||||
|
||||
def split_workflow_list(self, params: Dict[str, Any]) -> dict:
|
||||
"""查询可拆分工作流列表
|
||||
|
||||
参数:
|
||||
params: 查询条件参数
|
||||
|
||||
返回值:
|
||||
dict: 返回数据字典,失败返回空字典
|
||||
"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/workflow/split-workflow-list',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
"data": params,
|
||||
})
|
||||
if not response or response['code'] != 1:
|
||||
return {}
|
||||
return response.get("data", {})
|
||||
|
||||
def merge_workflow(self, data: Dict[str, Any]) -> dict:
|
||||
"""合并工作流(无参数版)
|
||||
|
||||
参数:
|
||||
data: 合并请求体,包含待合并的子工作流信息
|
||||
|
||||
返回值:
|
||||
dict: 合并结果,失败返回空字典
|
||||
"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/workflow/merge-workflow',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
"data": data,
|
||||
})
|
||||
if not response or response['code'] != 1:
|
||||
return {}
|
||||
return response.get("data", {})
|
||||
|
||||
def merge_workflow_with_parameters(self, data: Dict[str, Any]) -> dict:
|
||||
"""合并工作流(携带参数)
|
||||
|
||||
参数:
|
||||
data: 合并请求体,包含 name、workflows 以及 stepParameters 等
|
||||
|
||||
返回值:
|
||||
dict: 合并结果,失败返回空字典
|
||||
"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/workflow/merge-workflow-with-parameters',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
"data": data,
|
||||
})
|
||||
if not response or response['code'] != 1:
|
||||
return {}
|
||||
return response.get("data", {})
|
||||
|
||||
def validate_workflow_parameters(self, workflows: List[Dict[str, Any]]) -> Dict[str, Any]:
|
||||
"""验证工作流参数格式"""
|
||||
try:
|
||||
@@ -651,34 +494,35 @@ class BioyondV1RPC(BaseRequest):
|
||||
return {}
|
||||
return response.get("data", {})
|
||||
|
||||
def order_report(self, order_id: str) -> dict:
|
||||
"""查询订单报告
|
||||
|
||||
参数:
|
||||
order_id: 订单ID
|
||||
|
||||
返回值:
|
||||
dict: 报告数据,失败返回空字典
|
||||
def order_report(self, json_str: str) -> dict:
|
||||
"""
|
||||
描述:查询某个任务明细
|
||||
json_str 格式为JSON字符串:
|
||||
'{"order_id": "order123"}'
|
||||
"""
|
||||
try:
|
||||
data = json.loads(json_str)
|
||||
order_id = data.get("order_id", "")
|
||||
except json.JSONDecodeError:
|
||||
return {}
|
||||
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/order/order-report',
|
||||
url=f'{self.host}/api/lims/order/project-order-report',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
"data": order_id,
|
||||
})
|
||||
|
||||
if not response or response['code'] != 1:
|
||||
return {}
|
||||
return response.get("data", {})
|
||||
|
||||
def order_takeout(self, json_str: str) -> int:
|
||||
"""取出任务产物
|
||||
|
||||
参数:
|
||||
json_str: JSON字符串,包含 order_id 与 preintake_id
|
||||
|
||||
返回值:
|
||||
int: 成功返回1,失败返回0
|
||||
"""
|
||||
描述:取出任务产物
|
||||
json_str 格式为JSON字符串:
|
||||
'{"order_id": "order123", "preintake_id": "preintake123"}'
|
||||
"""
|
||||
try:
|
||||
data = json.loads(json_str)
|
||||
@@ -701,15 +545,14 @@ class BioyondV1RPC(BaseRequest):
|
||||
return 0
|
||||
return response.get("code", 0)
|
||||
|
||||
|
||||
def sample_waste_removal(self, order_id: str) -> dict:
|
||||
"""样品/废料取出
|
||||
"""
|
||||
样品/废料取出接口
|
||||
|
||||
参数:
|
||||
order_id: 订单ID
|
||||
- order_id: 订单ID
|
||||
|
||||
返回值:
|
||||
dict: 取出结果,失败返回空字典
|
||||
返回: 取出结果
|
||||
"""
|
||||
params = {"orderId": order_id}
|
||||
|
||||
@@ -731,13 +574,10 @@ class BioyondV1RPC(BaseRequest):
|
||||
return response.get("data", {})
|
||||
|
||||
def cancel_order(self, json_str: str) -> bool:
|
||||
"""取消指定任务
|
||||
|
||||
参数:
|
||||
json_str: JSON字符串,包含 order_id
|
||||
|
||||
返回值:
|
||||
bool: 成功返回 True,失败返回 False
|
||||
"""
|
||||
描述:取消指定任务
|
||||
json_str 格式为JSON字符串:
|
||||
'{"order_id": "order123"}'
|
||||
"""
|
||||
try:
|
||||
data = json.loads(json_str)
|
||||
@@ -757,126 +597,6 @@ class BioyondV1RPC(BaseRequest):
|
||||
return False
|
||||
return True
|
||||
|
||||
def cancel_experiment(self, order_id: str) -> int:
|
||||
"""取消指定实验
|
||||
|
||||
参数:
|
||||
order_id: 订单ID
|
||||
|
||||
返回值:
|
||||
int: 成功返回1,失败返回0
|
||||
"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/order/cancel-experiment',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
"data": order_id,
|
||||
})
|
||||
if not response or response['code'] != 1:
|
||||
return 0
|
||||
return response.get("code", 0)
|
||||
|
||||
def batch_cancel_experiment(self, order_ids: List[str]) -> int:
|
||||
"""批量取消实验
|
||||
|
||||
参数:
|
||||
order_ids: 订单ID列表
|
||||
|
||||
返回值:
|
||||
int: 成功返回1,失败返回0
|
||||
"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/order/batch-cancel-experiment',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
"data": order_ids,
|
||||
})
|
||||
if not response or response['code'] != 1:
|
||||
return 0
|
||||
return response.get("code", 0)
|
||||
|
||||
def gantts_by_order_id(self, order_id: str) -> dict:
|
||||
"""查询订单甘特图数据
|
||||
|
||||
参数:
|
||||
order_id: 订单ID
|
||||
|
||||
返回值:
|
||||
dict: 甘特数据,失败返回空字典
|
||||
"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/order/gantts-by-order-id',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
"data": order_id,
|
||||
})
|
||||
if not response or response['code'] != 1:
|
||||
return {}
|
||||
return response.get("data", {})
|
||||
|
||||
def simulation_gantt_by_order_id(self, order_id: str) -> dict:
|
||||
"""查询订单模拟甘特图数据
|
||||
|
||||
参数:
|
||||
order_id: 订单ID
|
||||
|
||||
返回值:
|
||||
dict: 模拟甘特数据,失败返回空字典
|
||||
"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/order/simulation-gantt-by-order-id',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
"data": order_id,
|
||||
})
|
||||
if not response or response['code'] != 1:
|
||||
return {}
|
||||
return response.get("data", {})
|
||||
|
||||
def reset_order_status(self, order_id: str) -> int:
|
||||
"""复位订单状态
|
||||
|
||||
参数:
|
||||
order_id: 订单ID
|
||||
|
||||
返回值:
|
||||
int: 成功返回1,失败返回0
|
||||
"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/order/reset-order-status',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
"data": order_id,
|
||||
})
|
||||
if not response or response['code'] != 1:
|
||||
return 0
|
||||
return response.get("code", 0)
|
||||
|
||||
def gantt_with_simulation_by_order_id(self, order_id: str) -> dict:
|
||||
"""查询订单甘特与模拟联合数据
|
||||
|
||||
参数:
|
||||
order_id: 订单ID
|
||||
|
||||
返回值:
|
||||
dict: 联合数据,失败返回空字典
|
||||
"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/order/gantt-with-simulation-by-order-id',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
"data": order_id,
|
||||
})
|
||||
if not response or response['code'] != 1:
|
||||
return {}
|
||||
return response.get("data", {})
|
||||
|
||||
# ==================== 设备管理相关接口 ====================
|
||||
|
||||
def device_list(self, json_str: str = "") -> list:
|
||||
@@ -908,13 +628,9 @@ class BioyondV1RPC(BaseRequest):
|
||||
return response.get("data", [])
|
||||
|
||||
def device_operation(self, json_str: str) -> int:
|
||||
"""设备操作
|
||||
|
||||
参数:
|
||||
json_str: JSON字符串,包含 device_no/operationType/operationParams
|
||||
|
||||
返回值:
|
||||
int: 成功返回1,失败返回0
|
||||
"""
|
||||
描述:操作设备
|
||||
json_str 格式为JSON字符串
|
||||
"""
|
||||
try:
|
||||
data = json.loads(json_str)
|
||||
@@ -927,7 +643,7 @@ class BioyondV1RPC(BaseRequest):
|
||||
return 0
|
||||
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/device/execute-operation',
|
||||
url=f'{self.host}/api/lims/device/device-operation',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
@@ -938,30 +654,9 @@ class BioyondV1RPC(BaseRequest):
|
||||
return 0
|
||||
return response.get("code", 0)
|
||||
|
||||
def reset_devices(self) -> int:
|
||||
"""复位设备集合
|
||||
|
||||
返回值:
|
||||
int: 成功返回1,失败返回0
|
||||
"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/device/reset-devices',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
})
|
||||
if not response or response['code'] != 1:
|
||||
return 0
|
||||
return response.get("code", 0)
|
||||
|
||||
# ==================== 调度器相关接口 ====================
|
||||
|
||||
def scheduler_status(self) -> dict:
|
||||
"""查询调度器状态
|
||||
|
||||
返回值:
|
||||
dict: 包含 schedulerStatus/hasTask/creationTime 等
|
||||
"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/scheduler/scheduler-status',
|
||||
params={
|
||||
@@ -974,7 +669,7 @@ class BioyondV1RPC(BaseRequest):
|
||||
return response.get("data", {})
|
||||
|
||||
def scheduler_start(self) -> int:
|
||||
"""启动调度器"""
|
||||
"""描述:启动调度器"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/scheduler/start',
|
||||
params={
|
||||
@@ -987,7 +682,7 @@ class BioyondV1RPC(BaseRequest):
|
||||
return response.get("code", 0)
|
||||
|
||||
def scheduler_pause(self) -> int:
|
||||
"""暂停调度器"""
|
||||
"""描述:暂停调度器"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/scheduler/pause',
|
||||
params={
|
||||
@@ -999,21 +694,8 @@ class BioyondV1RPC(BaseRequest):
|
||||
return 0
|
||||
return response.get("code", 0)
|
||||
|
||||
def scheduler_smart_pause(self) -> int:
|
||||
"""智能暂停调度器"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/scheduler/smart-pause',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
})
|
||||
|
||||
if not response or response['code'] != 1:
|
||||
return 0
|
||||
return response.get("code", 0)
|
||||
|
||||
def scheduler_continue(self) -> int:
|
||||
"""继续调度器"""
|
||||
"""描述:继续调度器"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/scheduler/continue',
|
||||
params={
|
||||
@@ -1026,7 +708,7 @@ class BioyondV1RPC(BaseRequest):
|
||||
return response.get("code", 0)
|
||||
|
||||
def scheduler_stop(self) -> int:
|
||||
"""停止调度器"""
|
||||
"""描述:停止调度器"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/scheduler/stop',
|
||||
params={
|
||||
@@ -1039,7 +721,7 @@ class BioyondV1RPC(BaseRequest):
|
||||
return response.get("code", 0)
|
||||
|
||||
def scheduler_reset(self) -> int:
|
||||
"""复位调度器"""
|
||||
"""描述:复位调度器"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/scheduler/reset',
|
||||
params={
|
||||
@@ -1051,26 +733,6 @@ class BioyondV1RPC(BaseRequest):
|
||||
return 0
|
||||
return response.get("code", 0)
|
||||
|
||||
def scheduler_reply_error_handling(self, data: Dict[str, Any]) -> int:
|
||||
"""调度错误处理回复
|
||||
|
||||
参数:
|
||||
data: 错误处理参数
|
||||
|
||||
返回值:
|
||||
int: 成功返回1,失败返回0
|
||||
"""
|
||||
response = self.post(
|
||||
url=f'{self.host}/api/lims/scheduler/reply-error-handling',
|
||||
params={
|
||||
"apiKey": self.api_key,
|
||||
"requestTime": self.get_current_time_iso8601(),
|
||||
"data": data,
|
||||
})
|
||||
if not response or response['code'] != 1:
|
||||
return 0
|
||||
return response.get("code", 0)
|
||||
|
||||
# ==================== 辅助方法 ====================
|
||||
|
||||
def _load_material_cache(self):
|
||||
@@ -1134,23 +796,3 @@ class BioyondV1RPC(BaseRequest):
|
||||
def get_available_materials(self):
|
||||
"""获取所有可用的材料名称列表"""
|
||||
return list(self.material_cache.keys())
|
||||
|
||||
def get_scheduler_state(self) -> Optional[MachineState]:
|
||||
"""将调度状态字符串映射为枚举值
|
||||
|
||||
返回值:
|
||||
Optional[MachineState]: 映射后的枚举,失败返回 None
|
||||
"""
|
||||
data = self.scheduler_status()
|
||||
if not isinstance(data, dict):
|
||||
return None
|
||||
status = data.get("schedulerStatus")
|
||||
mapping = {
|
||||
"Init": MachineState.INITIAL,
|
||||
"Stop": MachineState.STOPPED,
|
||||
"Running": MachineState.RUNNING,
|
||||
"Pause": MachineState.PAUSED,
|
||||
"ErrorPause": MachineState.ERROR_PAUSED,
|
||||
"ErrorStop": MachineState.ERROR_STOPPED,
|
||||
}
|
||||
return mapping.get(status)
|
||||
|
||||
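`get_scheduler_state` above turns the raw `schedulerStatus` string into a `MachineState` value, which makes polling loops straightforward. A minimal sketch, assuming `rpc` is an already-constructed `BioyondV1RPC` instance and `wait_until_not_running` is a hypothetical caller-side helper:

```python
import time

def wait_until_not_running(rpc, poll_seconds: float = 5.0, timeout: float = 600.0):
    """Poll the scheduler until it leaves RUNNING, or give up after timeout seconds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        state = rpc.get_scheduler_state()
        if state is not None and state != MachineState.RUNNING:
            return state
        time.sleep(poll_seconds)
    return None  # timed out; the caller decides how to handle it
```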
@@ -1,25 +1,10 @@
|
||||
from datetime import datetime
|
||||
import json
|
||||
import time
|
||||
from typing import Optional, Dict, Any, List
|
||||
from typing_extensions import TypedDict
|
||||
import requests
|
||||
from unilabos.devices.workstation.bioyond_studio.config import API_CONFIG
|
||||
from typing import Optional, Dict, Any
|
||||
|
||||
from unilabos.devices.workstation.bioyond_studio.bioyond_rpc import BioyondException
|
||||
from unilabos.devices.workstation.bioyond_studio.station import BioyondWorkstation
|
||||
from unilabos.ros.nodes.base_device_node import ROS2DeviceNode, BaseROS2DeviceNode
|
||||
import json
|
||||
import sys
|
||||
from pathlib import Path
|
||||
import importlib
|
||||
|
||||
class ComputeExperimentDesignReturn(TypedDict):
|
||||
solutions: list
|
||||
titration: dict
|
||||
solvents: dict
|
||||
feeding_order: list
|
||||
return_info: str
|
||||
|
||||
|
||||
class BioyondDispensingStation(BioyondWorkstation):
|
||||
@@ -43,108 +28,6 @@ class BioyondDispensingStation(BioyondWorkstation):
|
||||
# 用于跟踪任务完成状态的字典: {orderCode: {status, order_id, timestamp}}
|
||||
self.order_completion_status = {}
|
||||
|
||||
    def _post_project_api(self, endpoint: str, data: Any) -> Dict[str, Any]:
        """项目接口通用POST调用

        参数:
            endpoint: 接口路径(例如 /api/lims/order/brief-step-paramerers)
            data: 请求体中的 data 字段内容

        返回:
            dict: 服务端响应,失败时返回 {code:0,message,...}
        """
        request_data = {
            "apiKey": API_CONFIG["api_key"],
            "requestTime": self.hardware_interface.get_current_time_iso8601(),
            "data": data
        }
        try:
            response = requests.post(
                f"{self.hardware_interface.host}{endpoint}",
                json=request_data,
                headers={"Content-Type": "application/json"},
                timeout=30
            )
            result = response.json()
            return result if isinstance(result, dict) else {"code": 0, "message": "非JSON响应"}
        except json.JSONDecodeError:
            return {"code": 0, "message": "非JSON响应"}
        except requests.exceptions.Timeout:
            return {"code": 0, "message": "请求超时"}
        except requests.exceptions.RequestException as e:
            return {"code": 0, "message": str(e)}

def _delete_project_api(self, endpoint: str, data: Any) -> Dict[str, Any]:
|
||||
"""项目接口通用DELETE调用
|
||||
|
||||
参数:
|
||||
endpoint: 接口路径(例如 /api/lims/order/workflows)
|
||||
data: 请求体中的 data 字段内容
|
||||
|
||||
返回:
|
||||
dict: 服务端响应,失败时返回 {code:0,message,...}
|
||||
"""
|
||||
request_data = {
|
||||
"apiKey": API_CONFIG["api_key"],
|
||||
"requestTime": self.hardware_interface.get_current_time_iso8601(),
|
||||
"data": data
|
||||
}
|
||||
try:
|
||||
response = requests.delete(
|
||||
f"{self.hardware_interface.host}{endpoint}",
|
||||
json=request_data,
|
||||
headers={"Content-Type": "application/json"},
|
||||
timeout=30
|
||||
)
|
||||
result = response.json()
|
||||
return result if isinstance(result, dict) else {"code": 0, "message": "非JSON响应"}
|
||||
except json.JSONDecodeError:
|
||||
return {"code": 0, "message": "非JSON响应"}
|
||||
except requests.exceptions.Timeout:
|
||||
return {"code": 0, "message": "请求超时"}
|
||||
except requests.exceptions.RequestException as e:
|
||||
return {"code": 0, "message": str(e)}
|
||||
|
||||
def compute_experiment_design(
|
||||
self,
|
||||
ratio: dict,
|
||||
wt_percent: str = "0.25",
|
||||
m_tot: str = "70",
|
||||
titration_percent: str = "0.03",
|
||||
) -> ComputeExperimentDesignReturn:
|
||||
try:
|
||||
if isinstance(ratio, str):
|
||||
try:
|
||||
ratio = json.loads(ratio)
|
||||
except Exception:
|
||||
ratio = {}
|
||||
root = str(Path(__file__).resolve().parents[3])
|
||||
if root not in sys.path:
|
||||
sys.path.append(root)
|
||||
try:
|
||||
mod = importlib.import_module("tem.compute")
|
||||
except Exception as e:
|
||||
raise BioyondException(f"无法导入计算模块: {e}")
|
||||
try:
|
||||
wp = float(wt_percent) if isinstance(wt_percent, str) else wt_percent
|
||||
mt = float(m_tot) if isinstance(m_tot, str) else m_tot
|
||||
tp = float(titration_percent) if isinstance(titration_percent, str) else titration_percent
|
||||
except Exception as e:
|
||||
raise BioyondException(f"参数解析失败: {e}")
|
||||
res = mod.generate_experiment_design(ratio=ratio, wt_percent=wp, m_tot=mt, titration_percent=tp)
|
||||
out = {
|
||||
"solutions": res.get("solutions", []),
|
||||
"titration": res.get("titration", {}),
|
||||
"solvents": res.get("solvents", {}),
|
||||
"feeding_order": res.get("feeding_order", []),
|
||||
"return_info": json.dumps(res, ensure_ascii=False)
|
||||
}
|
||||
return out
|
||||
except BioyondException:
|
||||
raise
|
||||
except Exception as e:
|
||||
raise BioyondException(str(e))
|
||||
|
||||
# 90%10%小瓶投料任务创建方法
|
||||
def create_90_10_vial_feeding_task(self,
|
||||
order_name: str = None,
|
||||
@@ -766,40 +649,6 @@ class BioyondDispensingStation(BioyondWorkstation):
|
||||
self.hardware_interface._logger.error(error_msg)
|
||||
raise BioyondException(error_msg)
|
||||
|
||||
def brief_step_parameters(self, data: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""获取简要步骤参数(站点项目接口)
|
||||
|
||||
参数:
|
||||
data: 查询参数字典
|
||||
|
||||
返回值:
|
||||
dict: 接口返回数据
|
||||
"""
|
||||
return self._post_project_api("/api/lims/order/brief-step-paramerers", data)
|
||||
|
||||
def project_order_report(self, order_id: str) -> Dict[str, Any]:
|
||||
"""查询项目端订单报告(兼容旧路径)
|
||||
|
||||
参数:
|
||||
order_id: 订单ID
|
||||
|
||||
返回值:
|
||||
dict: 报告数据
|
||||
"""
|
||||
return self._post_project_api("/api/lims/order/project-order-report", order_id)
|
||||
|
||||
def workflow_sample_locations(self, workflow_id: str) -> Dict[str, Any]:
|
||||
"""查询工作流样品库位(站点项目接口)
|
||||
|
||||
参数:
|
||||
workflow_id: 工作流ID
|
||||
|
||||
返回值:
|
||||
dict: 位置信息数据
|
||||
"""
|
||||
return self._post_project_api("/api/lims/storage/workflow-sample-locations", workflow_id)
|
||||
|
||||
|
||||
# 批量创建90%10%小瓶投料任务
|
||||
def batch_create_90_10_vial_feeding_tasks(self,
|
||||
titration,
|
||||
@@ -930,38 +779,8 @@ class BioyondDispensingStation(BioyondWorkstation):
|
||||
self.hardware_interface._logger.error(error_msg)
|
||||
raise BioyondException(error_msg)
|
||||
|
||||
def _extract_actuals_from_report(self, report) -> Dict[str, Any]:
|
||||
data = report.get('data') if isinstance(report, dict) else None
|
||||
actual_target_weigh = None
|
||||
actual_volume = None
|
||||
if data:
|
||||
extra = data.get('extraProperties') or {}
|
||||
if isinstance(extra, dict):
|
||||
for v in extra.values():
|
||||
obj = None
|
||||
try:
|
||||
obj = json.loads(v) if isinstance(v, str) else v
|
||||
except Exception:
|
||||
obj = None
|
||||
if isinstance(obj, dict):
|
||||
tw = obj.get('targetWeigh')
|
||||
vol = obj.get('volume')
|
||||
if tw is not None:
|
||||
try:
|
||||
actual_target_weigh = float(tw)
|
||||
except Exception:
|
||||
pass
|
||||
if vol is not None:
|
||||
try:
|
||||
actual_volume = float(vol)
|
||||
except Exception:
|
||||
pass
|
||||
return {
|
||||
'actualTargetWeigh': actual_target_weigh,
|
||||
'actualVolume': actual_volume
|
||||
}
|
||||
|
||||
# 等待多个任务完成并获取实验报告
|
||||
|
||||
def wait_for_multiple_orders_and_get_reports(self,
|
||||
batch_create_result: str = None,
|
||||
timeout: int = 7200,
|
||||
@@ -1083,7 +902,6 @@ class BioyondDispensingStation(BioyondWorkstation):
|
||||
"status": "timeout",
|
||||
"completion_status": None,
|
||||
"report": None,
|
||||
"extracted": None,
|
||||
"elapsed_time": elapsed_time
|
||||
})
|
||||
|
||||
@@ -1103,7 +921,8 @@ class BioyondDispensingStation(BioyondWorkstation):
|
||||
|
||||
# 获取实验报告
|
||||
try:
|
||||
report = self.project_order_report(order_id)
|
||||
report_query = json.dumps({"order_id": order_id})
|
||||
report = self.hardware_interface.order_report(report_query)
|
||||
|
||||
if not report:
|
||||
self.hardware_interface._logger.warning(
|
||||
@@ -1121,7 +940,6 @@ class BioyondDispensingStation(BioyondWorkstation):
|
||||
"status": "completed",
|
||||
"completion_status": completion_info.get('status'),
|
||||
"report": report,
|
||||
"extracted": self._extract_actuals_from_report(report),
|
||||
"elapsed_time": elapsed_time
|
||||
})
|
||||
|
||||
@@ -1141,7 +959,6 @@ class BioyondDispensingStation(BioyondWorkstation):
|
||||
"status": "error",
|
||||
"completion_status": completion_info.get('status'),
|
||||
"report": None,
|
||||
"extracted": None,
|
||||
"error": str(e),
|
||||
"elapsed_time": elapsed_time
|
||||
})
|
||||
@@ -1235,266 +1052,6 @@ class BioyondDispensingStation(BioyondWorkstation):
|
||||
self.hardware_interface._logger.error(f"处理任务完成报送失败: {e}")
|
||||
return {"processed": False, "error": str(e)}
|
||||
|
||||
def transfer_materials_to_reaction_station(
|
||||
self,
|
||||
target_device_id: str,
|
||||
transfer_groups: list
|
||||
) -> dict:
|
||||
"""
|
||||
将配液站完成的物料转移到指定反应站的堆栈库位
|
||||
支持多组转移任务,每组包含物料名称、目标堆栈和目标库位
|
||||
|
||||
Args:
|
||||
target_device_id: 目标反应站设备ID(所有转移组使用同一个设备)
|
||||
transfer_groups: 转移任务组列表,每组包含:
|
||||
- materials: 物料名称(字符串,将通过RPC查询)
|
||||
- target_stack: 目标堆栈名称(如"堆栈1左")
|
||||
- target_sites: 目标库位(如"A01")
|
||||
|
||||
Returns:
|
||||
dict: 转移结果
|
||||
{
|
||||
"success": bool,
|
||||
"total_groups": int,
|
||||
"successful_groups": int,
|
||||
"failed_groups": int,
|
||||
"target_device_id": str,
|
||||
"details": [...]
|
||||
}
|
||||
"""
|
||||
try:
|
||||
# 验证参数
|
||||
if not target_device_id:
|
||||
raise ValueError("目标设备ID不能为空")
|
||||
|
||||
if not transfer_groups:
|
||||
raise ValueError("转移任务组列表不能为空")
|
||||
|
||||
if not isinstance(transfer_groups, list):
|
||||
raise ValueError("transfer_groups必须是列表类型")
|
||||
|
||||
# 标准化设备ID格式: 确保以 /devices/ 开头
|
||||
if not target_device_id.startswith("/devices/"):
|
||||
if target_device_id.startswith("/"):
|
||||
target_device_id = f"/devices{target_device_id}"
|
||||
else:
|
||||
target_device_id = f"/devices/{target_device_id}"
|
||||
|
||||
self.hardware_interface._logger.info(
|
||||
f"目标设备ID标准化为: {target_device_id}"
|
||||
)
|
||||
|
||||
self.hardware_interface._logger.info(
|
||||
f"开始执行批量物料转移: {len(transfer_groups)}组任务 -> {target_device_id}"
|
||||
)
|
||||
|
||||
from .config import WAREHOUSE_MAPPING
|
||||
results = []
|
||||
successful_count = 0
|
||||
failed_count = 0
|
||||
|
||||
for idx, group in enumerate(transfer_groups, 1):
|
||||
try:
|
||||
# 提取参数
|
||||
material_name = group.get("materials", "")
|
||||
target_stack = group.get("target_stack", "")
|
||||
target_sites = group.get("target_sites", "")
|
||||
|
||||
# 验证必填参数
|
||||
if not material_name:
|
||||
raise ValueError(f"第{idx}组: 物料名称不能为空")
|
||||
if not target_stack:
|
||||
raise ValueError(f"第{idx}组: 目标堆栈不能为空")
|
||||
if not target_sites:
|
||||
raise ValueError(f"第{idx}组: 目标库位不能为空")
|
||||
|
||||
self.hardware_interface._logger.info(
|
||||
f"处理第{idx}组转移: {material_name} -> "
|
||||
f"{target_device_id}/{target_stack}/{target_sites}"
|
||||
)
|
||||
|
||||
# 通过物料名称从deck获取ResourcePLR对象
|
||||
try:
|
||||
material_resource = self.deck.get_resource(material_name)
|
||||
if not material_resource:
|
||||
raise ValueError(f"在deck中未找到物料: {material_name}")
|
||||
|
||||
self.hardware_interface._logger.info(
|
||||
f"从deck获取到物料 {material_name}: {material_resource}"
|
||||
)
|
||||
except Exception as e:
|
||||
raise ValueError(
|
||||
f"获取物料 {material_name} 失败: {str(e)},请确认物料已正确加载到deck中"
|
||||
)
|
||||
|
||||
# 验证目标堆栈是否存在
|
||||
if target_stack not in WAREHOUSE_MAPPING:
|
||||
raise ValueError(
|
||||
f"未知的堆栈名称: {target_stack},"
|
||||
f"可选值: {list(WAREHOUSE_MAPPING.keys())}"
|
||||
)
|
||||
|
||||
# 验证库位是否有效
|
||||
stack_sites = WAREHOUSE_MAPPING[target_stack].get("site_uuids", {})
|
||||
if target_sites not in stack_sites:
|
||||
raise ValueError(
|
||||
f"库位 {target_sites} 不存在于堆栈 {target_stack} 中,"
|
||||
f"可选库位: {list(stack_sites.keys())}"
|
||||
)
|
||||
|
||||
# 获取目标库位的UUID
|
||||
target_site_uuid = stack_sites[target_sites]
|
||||
if not target_site_uuid:
|
||||
raise ValueError(
|
||||
f"库位 {target_sites} 的 UUID 未配置,请在 WAREHOUSE_MAPPING 中完善"
|
||||
)
|
||||
|
||||
# 目标位点(包含UUID)
|
||||
future = ROS2DeviceNode.run_async_func(
|
||||
self._ros_node.get_resource_with_dir,
|
||||
True,
|
||||
**{
|
||||
"resource_id": f"/reaction_station_bioyond/Bioyond_Deck/{target_stack}",
|
||||
"with_children": True,
|
||||
},
|
||||
)
|
||||
# 等待异步完成后再获取结果
|
||||
if not future:
|
||||
raise ValueError(f"获取目标堆栈资源future无效: {target_stack}")
|
||||
while not future.done():
|
||||
time.sleep(0.1)
|
||||
target_site_resource = future.result()
|
||||
|
||||
# 调用父类的 transfer_resource_to_another 方法
|
||||
# 传入ResourcePLR对象和目标位点资源
|
||||
future = self.transfer_resource_to_another(
|
||||
resource=[material_resource],
|
||||
mount_resource=[target_site_resource],
|
||||
sites=[target_sites],
|
||||
mount_device_id=target_device_id
|
||||
)
|
||||
|
||||
# 等待异步任务完成(轮询直到完成,再取结果)
|
||||
if future:
|
||||
try:
|
||||
while not future.done():
|
||||
time.sleep(0.1)
|
||||
future.result()
|
||||
self.hardware_interface._logger.info(
|
||||
f"异步转移任务已完成: {material_name}"
|
||||
)
|
||||
except Exception as e:
|
||||
raise ValueError(f"转移任务执行失败: {str(e)}")
|
||||
|
||||
self.hardware_interface._logger.info(
|
||||
f"第{idx}组转移成功: {material_name} -> "
|
||||
f"{target_device_id}/{target_stack}/{target_sites}"
|
||||
)
|
||||
|
||||
successful_count += 1
|
||||
results.append({
|
||||
"group_index": idx,
|
||||
"success": True,
|
||||
"material_name": material_name,
|
||||
"target_stack": target_stack,
|
||||
"target_site": target_sites,
|
||||
"message": "转移成功"
|
||||
})
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"第{idx}组转移失败: {str(e)}"
|
||||
self.hardware_interface._logger.error(error_msg)
|
||||
failed_count += 1
|
||||
results.append({
|
||||
"group_index": idx,
|
||||
"success": False,
|
||||
"material_name": group.get("materials", ""),
|
||||
"error": str(e)
|
||||
})
|
||||
|
||||
# 返回汇总结果
|
||||
return {
|
||||
"success": failed_count == 0,
|
||||
"total_groups": len(transfer_groups),
|
||||
"successful_groups": successful_count,
|
||||
"failed_groups": failed_count,
|
||||
"target_device_id": target_device_id,
|
||||
"details": results,
|
||||
"message": f"完成 {len(transfer_groups)} 组转移任务到 {target_device_id}: "
|
||||
f"{successful_count} 成功, {failed_count} 失败"
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"批量转移物料失败: {str(e)}"
|
||||
self.hardware_interface._logger.error(error_msg)
|
||||
return {
|
||||
"success": False,
|
||||
"total_groups": len(transfer_groups) if transfer_groups else 0,
|
||||
"successful_groups": 0,
|
||||
"failed_groups": len(transfer_groups) if transfer_groups else 0,
|
||||
"target_device_id": target_device_id if target_device_id else "",
|
||||
"error": error_msg
|
||||
}
|
||||
|
||||
def query_resource_by_name(self, material_name: str):
|
||||
"""
|
||||
通过物料名称查询资源对象(适用于Bioyond系统)
|
||||
|
||||
Args:
|
||||
material_name: 物料名称
|
||||
|
||||
Returns:
|
||||
物料ID或None
|
||||
"""
|
||||
try:
|
||||
# Bioyond系统使用material_cache存储物料信息
|
||||
if not hasattr(self.hardware_interface, 'material_cache'):
|
||||
self.hardware_interface._logger.error(
|
||||
"hardware_interface没有material_cache属性"
|
||||
)
|
||||
return None
|
||||
|
||||
material_cache = self.hardware_interface.material_cache
|
||||
|
||||
self.hardware_interface._logger.info(
|
||||
f"查询物料 '{material_name}', 缓存中共有 {len(material_cache)} 个物料"
|
||||
)
|
||||
|
||||
# 调试: 打印前几个物料信息
|
||||
if material_cache:
|
||||
cache_items = list(material_cache.items())[:5]
|
||||
for name, material_id in cache_items:
|
||||
self.hardware_interface._logger.debug(
|
||||
f"缓存物料: name={name}, id={material_id}"
|
||||
)
|
||||
|
||||
# 直接从缓存中查找
|
||||
if material_name in material_cache:
|
||||
material_id = material_cache[material_name]
|
||||
self.hardware_interface._logger.info(
|
||||
f"找到物料: {material_name} -> ID: {material_id}"
|
||||
)
|
||||
return material_id
|
||||
|
||||
self.hardware_interface._logger.warning(
|
||||
f"未找到物料: {material_name} (缓存中无此物料)"
|
||||
)
|
||||
|
||||
# 打印所有可用物料名称供参考
|
||||
available_materials = list(material_cache.keys())
|
||||
if available_materials:
|
||||
self.hardware_interface._logger.info(
|
||||
f"可用物料列表(前10个): {available_materials[:10]}"
|
||||
)
|
||||
|
||||
return None
|
||||
|
||||
except Exception as e:
|
||||
self.hardware_interface._logger.error(
|
||||
f"查询物料失败 {material_name}: {str(e)}"
|
||||
)
|
||||
return None
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
bioyond = BioyondDispensingStation(config={
|
||||
|
||||
@@ -5,7 +5,6 @@ from typing import List, Dict, Any
|
||||
from pathlib import Path
|
||||
from datetime import datetime
|
||||
from unilabos.devices.workstation.bioyond_studio.station import BioyondWorkstation
|
||||
from unilabos.devices.workstation.bioyond_studio.bioyond_rpc import MachineState
|
||||
from unilabos.ros.msgs.message_converter import convert_to_ros_msg, Float64, String
|
||||
from unilabos.devices.workstation.bioyond_studio.config import (
|
||||
WORKFLOW_STEP_IDS,
|
||||
@@ -718,7 +717,8 @@ class BioyondReactionStation(BioyondWorkstation):
|
||||
if oc in self.order_completion_status:
|
||||
info = self.order_completion_status[oc]
|
||||
try:
|
||||
rep = self.hardware_interface.order_report(oid)
|
||||
rq = json.dumps({"order_id": oid})
|
||||
rep = self.hardware_interface.order_report(rq)
|
||||
if not rep:
|
||||
rep = {"error": "无法获取报告"}
|
||||
reports.append({
|
||||
@@ -912,106 +912,6 @@ class BioyondReactionStation(BioyondWorkstation):
|
||||
"""
|
||||
return self.hardware_interface.create_order(json_str)
|
||||
|
||||
def hard_delete_merged_workflows(self, workflow_ids: List[str]) -> Dict[str, Any]:
|
||||
"""
|
||||
调用新接口:硬删除合并后的工作流
|
||||
|
||||
Args:
|
||||
workflow_ids: 要删除的工作流ID数组
|
||||
|
||||
Returns:
|
||||
删除结果
|
||||
"""
|
||||
try:
|
||||
if not isinstance(workflow_ids, list):
|
||||
raise ValueError("workflow_ids必须是字符串数组")
|
||||
return self._delete_project_api("/api/lims/order/workflows", workflow_ids)
|
||||
except Exception as e:
|
||||
print(f"❌ 硬删除异常: {str(e)}")
|
||||
return {"code": 0, "message": str(e), "timestamp": int(time.time())}
|
||||
|
||||
# ==================== 项目接口通用方法 ====================
|
||||
|
||||
def _post_project_api(self, endpoint: str, data: Any) -> Dict[str, Any]:
|
||||
"""项目接口通用POST调用
|
||||
|
||||
参数:
|
||||
endpoint: 接口路径(例如 /api/lims/order/skip-titration-steps)
|
||||
data: 请求体中的 data 字段内容
|
||||
|
||||
返回:
|
||||
dict: 服务端响应,失败时返回 {code:0,message,...}
|
||||
"""
|
||||
request_data = {
|
||||
"apiKey": API_CONFIG["api_key"],
|
||||
"requestTime": self.hardware_interface.get_current_time_iso8601(),
|
||||
"data": data
|
||||
}
|
||||
print(f"\n📤 项目POST请求: {self.hardware_interface.host}{endpoint}")
|
||||
print(json.dumps(request_data, indent=4, ensure_ascii=False))
|
||||
try:
|
||||
response = requests.post(
|
||||
f"{self.hardware_interface.host}{endpoint}",
|
||||
json=request_data,
|
||||
headers={"Content-Type": "application/json"},
|
||||
timeout=30
|
||||
)
|
||||
result = response.json()
|
||||
if result.get("code") == 1:
|
||||
print("✅ 请求成功")
|
||||
else:
|
||||
print(f"❌ 请求失败: {result.get('message','未知错误')}")
|
||||
return result
|
||||
except json.JSONDecodeError:
|
||||
print("❌ 非JSON响应")
|
||||
return {"code": 0, "message": "非JSON响应", "timestamp": int(time.time())}
|
||||
except requests.exceptions.Timeout:
|
||||
print("❌ 请求超时")
|
||||
return {"code": 0, "message": "请求超时", "timestamp": int(time.time())}
|
||||
except requests.exceptions.RequestException as e:
|
||||
print(f"❌ 网络异常: {str(e)}")
|
||||
return {"code": 0, "message": str(e), "timestamp": int(time.time())}
|
||||
|
||||
def _delete_project_api(self, endpoint: str, data: Any) -> Dict[str, Any]:
|
||||
"""项目接口通用DELETE调用
|
||||
|
||||
参数:
|
||||
endpoint: 接口路径(例如 /api/lims/order/workflows)
|
||||
data: 请求体中的 data 字段内容
|
||||
|
||||
返回:
|
||||
dict: 服务端响应,失败时返回 {code:0,message,...}
|
||||
"""
|
||||
request_data = {
|
||||
"apiKey": API_CONFIG["api_key"],
|
||||
"requestTime": self.hardware_interface.get_current_time_iso8601(),
|
||||
"data": data
|
||||
}
|
||||
print(f"\n📤 项目DELETE请求: {self.hardware_interface.host}{endpoint}")
|
||||
print(json.dumps(request_data, indent=4, ensure_ascii=False))
|
||||
try:
|
||||
response = requests.delete(
|
||||
f"{self.hardware_interface.host}{endpoint}",
|
||||
json=request_data,
|
||||
headers={"Content-Type": "application/json"},
|
||||
timeout=30
|
||||
)
|
||||
result = response.json()
|
||||
if result.get("code") == 1:
|
||||
print("✅ 请求成功")
|
||||
else:
|
||||
print(f"❌ 请求失败: {result.get('message','未知错误')}")
|
||||
return result
|
||||
except json.JSONDecodeError:
|
||||
print("❌ 非JSON响应")
|
||||
return {"code": 0, "message": "非JSON响应", "timestamp": int(time.time())}
|
||||
except requests.exceptions.Timeout:
|
||||
print("❌ 请求超时")
|
||||
return {"code": 0, "message": "请求超时", "timestamp": int(time.time())}
|
||||
except requests.exceptions.RequestException as e:
|
||||
print(f"❌ 网络异常: {str(e)}")
|
||||
return {"code": 0, "message": str(e), "timestamp": int(time.time())}
|
||||
|
||||
# ==================== 工作流执行核心方法 ====================
|
||||
|
||||
def process_web_workflows(self, web_workflow_json: str) -> List[Dict[str, str]]:
|
||||
@@ -1042,6 +942,76 @@ class BioyondReactionStation(BioyondWorkstation):
|
||||
print(f"错误:处理工作流失败: {e}")
|
||||
return []
|
||||
|
||||
def process_and_execute_workflow(self, workflow_name: str, task_name: str) -> dict:
|
||||
"""
|
||||
一站式处理工作流程:解析网页工作流列表,合并工作流(带参数),然后发布任务
|
||||
|
||||
Args:
|
||||
workflow_name: 合并后的工作流名称
|
||||
task_name: 任务名称
|
||||
|
||||
Returns:
|
||||
任务创建结果
|
||||
"""
|
||||
web_workflow_list = self.get_workflow_sequence()
|
||||
print(f"\n{'='*60}")
|
||||
print(f"📋 处理网页工作流列表: {web_workflow_list}")
|
||||
print(f"{'='*60}")
|
||||
|
||||
web_workflow_json = json.dumps({"web_workflow_list": web_workflow_list})
|
||||
workflows_result = self.process_web_workflows(web_workflow_json)
|
||||
|
||||
if not workflows_result:
|
||||
return self._create_error_result("处理网页工作流列表失败", "process_web_workflows")
|
||||
|
||||
print(f"workflows_result 类型: {type(workflows_result)}")
|
||||
print(f"workflows_result 内容: {workflows_result}")
|
||||
|
||||
workflows_with_params = self._build_workflows_with_parameters(workflows_result)
|
||||
|
||||
merge_data = {
|
||||
"name": workflow_name,
|
||||
"workflows": workflows_with_params
|
||||
}
|
||||
|
||||
# print(f"\n🔄 合并工作流(带参数),名称: {workflow_name}")
|
||||
merged_workflow = self.merge_workflow_with_parameters(json.dumps(merge_data))
|
||||
|
||||
if not merged_workflow:
|
||||
return self._create_error_result("合并工作流失败", "merge_workflow_with_parameters")
|
||||
|
||||
workflow_id = merged_workflow.get("subWorkflows", [{}])[0].get("id", "")
|
||||
# print(f"\n📤 使用工作流创建任务: {workflow_name} (ID: {workflow_id})")
|
||||
|
||||
order_params = [{
|
||||
"orderCode": f"task_{self.hardware_interface.get_current_time_iso8601()}",
|
||||
"orderName": task_name,
|
||||
"workFlowId": workflow_id,
|
||||
"borderNumber": 1,
|
||||
"paramValues": {}
|
||||
}]
|
||||
|
||||
result = self.create_order(json.dumps(order_params))
|
||||
|
||||
if not result:
|
||||
return self._create_error_result("创建任务失败", "create_order")
|
||||
|
||||
# 清空工作流序列和参数,防止下次执行时累积重复
|
||||
self.pending_task_params = []
|
||||
self.clear_workflows() # 清空工作流序列,避免重复累积
|
||||
|
||||
# print(f"\n✅ 任务创建成功: {result}")
|
||||
# print(f"\n✅ 任务创建成功")
|
||||
print(f"{'='*60}\n")
|
||||
|
||||
# 返回结果,包含合并后的工作流数据和订单参数
|
||||
return json.dumps({
|
||||
"success": True,
|
||||
"result": result,
|
||||
"merged_workflow": merged_workflow,
|
||||
"order_params": order_params
|
||||
})
|
||||
|
||||
def _build_workflows_with_parameters(self, workflows_result: list) -> list:
|
||||
"""
|
||||
构建带参数的工作流列表
|
||||
@@ -1241,90 +1211,3 @@ class BioyondReactionStation(BioyondWorkstation):
|
||||
print(f" ❌ 工作流ID验证失败: {e}")
|
||||
print(f" 💡 将重新合并工作流")
|
||||
return False
|
||||
|
||||
def process_and_execute_workflow(self, workflow_name: str, task_name: str) -> dict:
|
||||
"""
|
||||
一站式处理工作流程:解析网页工作流列表,合并工作流(带参数),然后发布任务
|
||||
|
||||
Args:
|
||||
workflow_name: 合并后的工作流名称
|
||||
task_name: 任务名称
|
||||
|
||||
Returns:
|
||||
任务创建结果
|
||||
"""
|
||||
web_workflow_list = self.get_workflow_sequence()
|
||||
print(f"\n{'='*60}")
|
||||
print(f"📋 处理网页工作流列表: {web_workflow_list}")
|
||||
print(f"{'='*60}")
|
||||
|
||||
web_workflow_json = json.dumps({"web_workflow_list": web_workflow_list})
|
||||
workflows_result = self.process_web_workflows(web_workflow_json)
|
||||
|
||||
if not workflows_result:
|
||||
return self._create_error_result("处理网页工作流列表失败", "process_web_workflows")
|
||||
|
||||
print(f"workflows_result 类型: {type(workflows_result)}")
|
||||
print(f"workflows_result 内容: {workflows_result}")
|
||||
|
||||
workflows_with_params = self._build_workflows_with_parameters(workflows_result)
|
||||
|
||||
merge_data = {
|
||||
"name": workflow_name,
|
||||
"workflows": workflows_with_params
|
||||
}
|
||||
|
||||
# print(f"\n🔄 合并工作流(带参数),名称: {workflow_name}")
|
||||
merged_workflow = self.merge_workflow_with_parameters(json.dumps(merge_data))
|
||||
|
||||
if not merged_workflow:
|
||||
return self._create_error_result("合并工作流失败", "merge_workflow_with_parameters")
|
||||
|
||||
workflow_id = merged_workflow.get("subWorkflows", [{}])[0].get("id", "")
|
||||
# print(f"\n📤 使用工作流创建任务: {workflow_name} (ID: {workflow_id})")
|
||||
|
||||
order_params = [{
|
||||
"orderCode": f"task_{self.hardware_interface.get_current_time_iso8601()}",
|
||||
"orderName": task_name,
|
||||
"workFlowId": workflow_id,
|
||||
"borderNumber": 1,
|
||||
"paramValues": {}
|
||||
}]
|
||||
|
||||
result = self.create_order(json.dumps(order_params))
|
||||
|
||||
if not result:
|
||||
return self._create_error_result("创建任务失败", "create_order")
|
||||
|
||||
# 清空工作流序列和参数,防止下次执行时累积重复
|
||||
self.pending_task_params = []
|
||||
self.clear_workflows() # 清空工作流序列,避免重复累积
|
||||
|
||||
# print(f"\n✅ 任务创建成功: {result}")
|
||||
# print(f"\n✅ 任务创建成功")
|
||||
print(f"{'='*60}\n")
|
||||
|
||||
# 返回结果,包含合并后的工作流数据和订单参数
|
||||
return json.dumps({
|
||||
"success": True,
|
||||
"result": result,
|
||||
"merged_workflow": merged_workflow,
|
||||
"order_params": order_params
|
||||
})
|
||||
|
||||
# ==================== 反应器操作接口 ====================
|
||||
|
||||
def skip_titration_steps(self, preintake_id: str) -> Dict[str, Any]:
|
||||
"""跳过当前正在进行的滴定步骤
|
||||
|
||||
Args:
|
||||
preintake_id: 通量ID
|
||||
|
||||
Returns:
|
||||
Dict[str, Any]: 服务器响应,包含状态码、消息和时间戳
|
||||
"""
|
||||
try:
|
||||
return self._post_project_api("/api/lims/order/skip-titration-steps", preintake_id)
|
||||
except Exception as e:
|
||||
print(f"❌ 跳过滴定异常: {str(e)}")
|
||||
return {"code": 0, "message": str(e), "timestamp": int(time.time())}
|
||||
|
||||
@@ -147,7 +147,7 @@ class WorkstationBase(ABC):

    def __init__(
        self,
        deck: Optional[Deck],
        deck: Deck,
        *args,
        **kwargs, # 必须有kwargs
    ):
@@ -349,5 +349,5 @@ class WorkstationBase(ABC):


class ProtocolNode(WorkstationBase):
    def __init__(self, protocol_type: List[str], deck: Optional[PLRResource], *args, **kwargs):
    def __init__(self, deck: Optional[PLRResource], *args, **kwargs):
        super().__init__(deck, *args, **kwargs)
|
||||
|
||||
@@ -83,96 +83,6 @@ workstation.bioyond_dispensing_station:
|
||||
title: batch_create_diamine_solution_tasks参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-brief_step_parameters:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
data: null
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
data:
|
||||
type: object
|
||||
required:
|
||||
- data
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: brief_step_parameters参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-compute_experiment_design:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
m_tot: '70'
|
||||
ratio: null
|
||||
titration_percent: '0.03'
|
||||
wt_percent: '0.25'
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
m_tot:
|
||||
default: '70'
|
||||
type: string
|
||||
ratio:
|
||||
type: object
|
||||
titration_percent:
|
||||
default: '0.03'
|
||||
type: string
|
||||
wt_percent:
|
||||
default: '0.25'
|
||||
type: string
|
||||
required:
|
||||
- ratio
|
||||
type: object
|
||||
result:
|
||||
properties:
|
||||
feeding_order:
|
||||
items: {}
|
||||
title: Feeding Order
|
||||
type: array
|
||||
return_info:
|
||||
title: Return Info
|
||||
type: string
|
||||
solutions:
|
||||
items: {}
|
||||
title: Solutions
|
||||
type: array
|
||||
solvents:
|
||||
additionalProperties: true
|
||||
title: Solvents
|
||||
type: object
|
||||
titration:
|
||||
additionalProperties: true
|
||||
title: Titration
|
||||
type: object
|
||||
required:
|
||||
- solutions
|
||||
- titration
|
||||
- solvents
|
||||
- feeding_order
|
||||
- return_info
|
||||
title: ComputeExperimentDesignReturn
|
||||
type: object
|
||||
required:
|
||||
- goal
|
||||
title: compute_experiment_design参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-process_order_finish_report:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
@@ -202,85 +112,6 @@ workstation.bioyond_dispensing_station:
|
||||
title: process_order_finish_report参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-project_order_report:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
order_id: null
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
order_id:
|
||||
type: string
|
||||
required:
|
||||
- order_id
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: project_order_report参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-query_resource_by_name:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
material_name: null
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
material_name:
|
||||
type: string
|
||||
required:
|
||||
- material_name
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: query_resource_by_name参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-transfer_materials_to_reaction_station:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
target_device_id: null
|
||||
transfer_groups: null
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
target_device_id:
|
||||
type: string
|
||||
transfer_groups:
|
||||
type: array
|
||||
required:
|
||||
- target_device_id
|
||||
- transfer_groups
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: transfer_materials_to_reaction_station参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-wait_for_multiple_orders_and_get_reports:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
@@ -313,31 +144,6 @@ workstation.bioyond_dispensing_station:
|
||||
title: wait_for_multiple_orders_and_get_reports参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-workflow_sample_locations:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
workflow_id: null
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
workflow_id:
|
||||
type: string
|
||||
required:
|
||||
- workflow_id
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: workflow_sample_locations参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
create_90_10_vial_feeding_task:
|
||||
feedback: {}
|
||||
goal:
|
||||
|
||||
@@ -5,96 +5,6 @@ bioyond_dispensing_station:
|
||||
- bioyond_dispensing_station
|
||||
class:
|
||||
action_value_mappings:
|
||||
auto-brief_step_parameters:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
data: null
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
data:
|
||||
type: object
|
||||
required:
|
||||
- data
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: brief_step_parameters参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-compute_experiment_design:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
m_tot: '70'
|
||||
ratio: null
|
||||
titration_percent: '0.03'
|
||||
wt_percent: '0.25'
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
m_tot:
|
||||
default: '70'
|
||||
type: string
|
||||
ratio:
|
||||
type: object
|
||||
titration_percent:
|
||||
default: '0.03'
|
||||
type: string
|
||||
wt_percent:
|
||||
default: '0.25'
|
||||
type: string
|
||||
required:
|
||||
- ratio
|
||||
type: object
|
||||
result:
|
||||
properties:
|
||||
feeding_order:
|
||||
items: {}
|
||||
title: Feeding Order
|
||||
type: array
|
||||
return_info:
|
||||
title: Return Info
|
||||
type: string
|
||||
solutions:
|
||||
items: {}
|
||||
title: Solutions
|
||||
type: array
|
||||
solvents:
|
||||
additionalProperties: true
|
||||
title: Solvents
|
||||
type: object
|
||||
titration:
|
||||
additionalProperties: true
|
||||
title: Titration
|
||||
type: object
|
||||
required:
|
||||
- solutions
|
||||
- titration
|
||||
- solvents
|
||||
- feeding_order
|
||||
- return_info
|
||||
title: ComputeExperimentDesignReturn
|
||||
type: object
|
||||
required:
|
||||
- goal
|
||||
title: compute_experiment_design参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-process_order_finish_report:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
@@ -124,110 +34,6 @@ bioyond_dispensing_station:
|
||||
title: process_order_finish_report参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-project_order_report:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
order_id: null
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
order_id:
|
||||
type: string
|
||||
required:
|
||||
- order_id
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: project_order_report参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-query_resource_by_name:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
material_name: null
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
material_name:
|
||||
type: string
|
||||
required:
|
||||
- material_name
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: query_resource_by_name参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-transfer_materials_to_reaction_station:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
target_device_id: null
|
||||
transfer_groups: null
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
target_device_id:
|
||||
type: string
|
||||
transfer_groups:
|
||||
type: array
|
||||
required:
|
||||
- target_device_id
|
||||
- transfer_groups
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: transfer_materials_to_reaction_station参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-workflow_sample_locations:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
workflow_id: null
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
workflow_id:
|
||||
type: string
|
||||
required:
|
||||
- workflow_id
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: workflow_sample_locations参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
batch_create_90_10_vial_feeding_tasks:
|
||||
feedback: {}
|
||||
goal:
|
||||
@@ -620,56 +426,6 @@ bioyond_dispensing_station:
|
||||
title: DispenStationSolnPrep
|
||||
type: object
|
||||
type: DispenStationSolnPrep
|
||||
transfer_materials_to_reaction_station:
|
||||
feedback: {}
|
||||
goal:
|
||||
target_device_id: target_device_id
|
||||
transfer_groups: transfer_groups
|
||||
goal_default:
|
||||
target_device_id: ''
|
||||
transfer_groups: ''
|
||||
handles: {}
|
||||
placeholder_keys:
|
||||
target_device_id: unilabos_devices
|
||||
result: {}
|
||||
schema:
|
||||
description: 将配液站完成的物料(溶液、样品等)转移到指定反应站的堆栈库位。支持配置多组转移任务,每组包含物料名称、目标堆栈和目标库位。
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
target_device_id:
|
||||
description: 目标反应站设备ID(从设备列表中选择,所有转移组都使用同一个目标设备)
|
||||
type: string
|
||||
transfer_groups:
|
||||
description: 转移任务组列表,每组包含物料名称、目标堆栈和目标库位,可以添加多组
|
||||
items:
|
||||
properties:
|
||||
materials:
|
||||
description: 物料名称(手动输入,系统将通过RPC查询验证)
|
||||
type: string
|
||||
target_sites:
|
||||
description: 目标库位(手动输入,如"A01")
|
||||
type: string
|
||||
target_stack:
|
||||
description: 目标堆栈名称(手动输入,如"堆栈1左")
|
||||
type: string
|
||||
required:
|
||||
- materials
|
||||
- target_stack
|
||||
- target_sites
|
||||
type: object
|
||||
type: array
|
||||
required:
|
||||
- target_device_id
|
||||
- transfer_groups
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: transfer_materials_to_reaction_station参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
wait_for_multiple_orders_and_get_reports:
|
||||
feedback: {}
|
||||
goal:
|
||||
|
||||
@@ -61,9 +61,6 @@ camera:
|
||||
device_id:
|
||||
default: video_publisher
|
||||
type: string
|
||||
device_uuid:
|
||||
default: ''
|
||||
type: string
|
||||
period:
|
||||
default: 0.1
|
||||
type: number
|
||||
|
||||
@@ -4497,6 +4497,9 @@ liquid_handler:
|
||||
simulator:
|
||||
default: false
|
||||
type: boolean
|
||||
total_height:
|
||||
default: 310
|
||||
type: number
|
||||
required:
|
||||
- backend
|
||||
- deck
|
||||
|
||||
@@ -4,215 +4,6 @@ reaction_station.bioyond:
|
||||
- reaction_station_bioyond
|
||||
class:
|
||||
action_value_mappings:
|
||||
auto-create_order:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
json_str: null
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
json_str:
|
||||
type: string
|
||||
required:
|
||||
- json_str
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: create_order参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-hard_delete_merged_workflows:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
workflow_ids: null
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
workflow_ids:
|
||||
items:
|
||||
type: string
|
||||
type: array
|
||||
required:
|
||||
- workflow_ids
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: hard_delete_merged_workflows参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-merge_workflow_with_parameters:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
json_str: null
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
json_str:
|
||||
type: string
|
||||
required:
|
||||
- json_str
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: merge_workflow_with_parameters参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-process_temperature_cutoff_report:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
report_request: null
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
report_request:
|
||||
type: string
|
||||
required:
|
||||
- report_request
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: process_temperature_cutoff_report参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-process_web_workflows:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
web_workflow_json: null
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
web_workflow_json:
|
||||
type: string
|
||||
required:
|
||||
- web_workflow_json
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: process_web_workflows参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-skip_titration_steps:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
preintake_id: null
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
preintake_id:
|
||||
type: string
|
||||
required:
|
||||
- preintake_id
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: skip_titration_steps参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-wait_for_multiple_orders_and_get_reports:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
batch_create_result: null
|
||||
check_interval: 10
|
||||
timeout: 7200
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
batch_create_result:
|
||||
type: string
|
||||
check_interval:
|
||||
default: 10
|
||||
type: integer
|
||||
timeout:
|
||||
default: 7200
|
||||
type: integer
|
||||
required: []
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: wait_for_multiple_orders_and_get_reports参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
auto-workflow_step_query:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
workflow_id: null
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
workflow_id:
|
||||
type: string
|
||||
required:
|
||||
- workflow_id
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: workflow_step_query参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
drip_back:
|
||||
feedback: {}
|
||||
goal:
|
||||
@@ -733,7 +524,19 @@ reaction_station.bioyond:
|
||||
module: unilabos.devices.workstation.bioyond_studio.reaction_station:BioyondReactionStation
|
||||
protocol_type: []
|
||||
status_types:
|
||||
workflow_sequence: String
|
||||
all_workflows: dict
|
||||
average_viscosity: float
|
||||
bioyond_status: dict
|
||||
force: float
|
||||
in_temperature: float
|
||||
out_temperature: float
|
||||
pt100_temperature: float
|
||||
sensor_average_temperature: float
|
||||
setting_temperature: float
|
||||
speed: float
|
||||
target_temperature: float
|
||||
viscosity: float
|
||||
workstation_status: dict
|
||||
type: python
|
||||
config_info: []
|
||||
description: Bioyond反应站
|
||||
@@ -745,19 +548,21 @@ reaction_station.bioyond:
|
||||
config:
|
||||
type: object
|
||||
deck:
|
||||
type: string
|
||||
protocol_type:
|
||||
type: string
|
||||
type: object
|
||||
required: []
|
||||
type: object
|
||||
data:
|
||||
properties:
|
||||
workflow_sequence:
|
||||
items:
|
||||
type: string
|
||||
type: array
|
||||
all_workflows:
|
||||
type: object
|
||||
bioyond_status:
|
||||
type: object
|
||||
workstation_status:
|
||||
type: object
|
||||
required:
|
||||
- workflow_sequence
|
||||
- bioyond_status
|
||||
- all_workflows
|
||||
- workstation_status
|
||||
type: object
|
||||
version: 1.0.0
|
||||
reaction_station.reactor:
|
||||
@@ -765,34 +570,19 @@ reaction_station.reactor:
|
||||
- reactor
|
||||
- reaction_station_bioyond
|
||||
class:
|
||||
action_value_mappings:
|
||||
auto-update_metrics:
|
||||
feedback: {}
|
||||
goal: {}
|
||||
goal_default:
|
||||
payload: null
|
||||
handles: {}
|
||||
placeholder_keys: {}
|
||||
result: {}
|
||||
schema:
|
||||
description: ''
|
||||
properties:
|
||||
feedback: {}
|
||||
goal:
|
||||
properties:
|
||||
payload:
|
||||
type: object
|
||||
required:
|
||||
- payload
|
||||
type: object
|
||||
result: {}
|
||||
required:
|
||||
- goal
|
||||
title: update_metrics参数
|
||||
type: object
|
||||
type: UniLabJsonCommand
|
||||
action_value_mappings: {}
|
||||
module: unilabos.devices.workstation.bioyond_studio.reaction_station:BioyondReactor
|
||||
status_types: {}
|
||||
status_types:
|
||||
average_viscosity: float
|
||||
force: float
|
||||
in_temperature: float
|
||||
out_temperature: float
|
||||
pt100_temperature: float
|
||||
sensor_average_temperature: float
|
||||
setting_temperature: float
|
||||
speed: float
|
||||
target_temperature: float
|
||||
viscosity: float
|
||||
type: python
|
||||
config_info: []
|
||||
description: 反应站子设备-反应器
|
||||
@@ -803,14 +593,30 @@ reaction_station.reactor:
|
||||
properties:
|
||||
config:
|
||||
type: object
|
||||
deck:
|
||||
type: string
|
||||
protocol_type:
|
||||
type: string
|
||||
required: []
|
||||
type: object
|
||||
data:
|
||||
properties: {}
|
||||
properties:
|
||||
average_viscosity:
|
||||
type: number
|
||||
force:
|
||||
type: number
|
||||
in_temperature:
|
||||
type: number
|
||||
out_temperature:
|
||||
type: number
|
||||
pt100_temperature:
|
||||
type: number
|
||||
sensor_average_temperature:
|
||||
type: number
|
||||
setting_temperature:
|
||||
type: number
|
||||
speed:
|
||||
type: number
|
||||
target_temperature:
|
||||
type: number
|
||||
viscosity:
|
||||
type: number
|
||||
required: []
|
||||
type: object
|
||||
version: 1.0.0
|
||||
|
||||
@@ -6036,12 +6036,7 @@ workstation:
|
||||
properties:
|
||||
deck:
|
||||
type: string
|
||||
protocol_type:
|
||||
items:
|
||||
type: string
|
||||
type: array
|
||||
required:
|
||||
- protocol_type
|
||||
- deck
|
||||
type: object
|
||||
data:
|
||||
|
||||
@@ -453,7 +453,7 @@ class Registry:
        return status_schema

    def _generate_unilab_json_command_schema(
        self, method_args: List[Dict[str, Any]], method_name: str, return_annotation: Any = None
        self, method_args: List[Dict[str, Any]], method_name: str
    ) -> Dict[str, Any]:
        """
        根据UniLabJsonCommand方法信息生成JSON Schema,暂不支持嵌套类型
@@ -461,7 +461,6 @@ class Registry:
        Args:
            method_args: 方法信息字典,包含args等
            method_name: 方法名称
            return_annotation: 返回类型注解,用于生成result schema(仅支持TypedDict)

        Returns:
            JSON Schema格式的参数schema
@@ -490,68 +489,14 @@ class Registry:
            if param_required:
                schema["required"].append(param_name)

        # 生成result schema(仅当return_annotation是TypedDict时)
        result_schema = {}
        if return_annotation is not None and self._is_typed_dict(return_annotation):
            result_schema = self._generate_typed_dict_result_schema(return_annotation)

        return {
            "title": f"{method_name}参数",
            "description": f"",
            "type": "object",
            "properties": {"goal": schema, "feedback": {}, "result": result_schema},
            "properties": {"goal": schema, "feedback": {}, "result": {}},
            "required": ["goal"],
        }
||||
|
||||
def _is_typed_dict(self, annotation: Any) -> bool:
|
||||
"""
|
||||
检查类型注解是否是TypedDict
|
||||
|
||||
Args:
|
||||
annotation: 类型注解对象
|
||||
|
||||
Returns:
|
||||
是否为TypedDict
|
||||
"""
|
||||
if annotation is None or annotation == inspect.Parameter.empty:
|
||||
return False
|
||||
|
||||
# 使用 typing_extensions.is_typeddict 进行检查(Python < 3.12 兼容)
|
||||
try:
|
||||
from typing_extensions import is_typeddict
|
||||
|
||||
return is_typeddict(annotation)
|
||||
except ImportError:
|
||||
# 回退方案:检查 TypedDict 特有的属性
|
||||
if isinstance(annotation, type):
|
||||
return hasattr(annotation, "__required_keys__") and hasattr(annotation, "__optional_keys__")
|
||||
return False
|
||||
|
||||
def _generate_typed_dict_result_schema(self, return_annotation: Any) -> Dict[str, Any]:
|
||||
"""
|
||||
根据TypedDict类型生成result的JSON Schema
|
||||
|
||||
Args:
|
||||
return_annotation: TypedDict类型注解
|
||||
|
||||
Returns:
|
||||
JSON Schema格式的result schema
|
||||
"""
|
||||
if not self._is_typed_dict(return_annotation):
|
||||
return {}
|
||||
|
||||
try:
|
||||
from msgcenterpy.instances.typed_dict_instance import TypedDictMessageInstance
|
||||
|
||||
result_schema = TypedDictMessageInstance.get_json_schema_from_typed_dict(return_annotation)
|
||||
return result_schema
|
||||
except ImportError:
|
||||
logger.warning("[UniLab Registry] msgcenterpy未安装,无法生成TypedDict的result schema")
|
||||
return {}
|
||||
except Exception as e:
|
||||
logger.warning(f"[UniLab Registry] 生成TypedDict result schema失败: {e}")
|
||||
return {}
|
||||
|
||||
def _add_builtin_actions(self, device_config: Dict[str, Any], device_id: str):
|
||||
"""
|
||||
为设备配置添加内置的执行驱动命令动作
|
||||
@@ -632,15 +577,9 @@ class Registry:
|
||||
if "init_param_schema" not in device_config:
|
||||
device_config["init_param_schema"] = {}
|
||||
if "class" in device_config:
|
||||
if (
|
||||
"status_types" not in device_config["class"]
|
||||
or device_config["class"]["status_types"] is None
|
||||
):
|
||||
if "status_types" not in device_config["class"] or device_config["class"]["status_types"] is None:
|
||||
device_config["class"]["status_types"] = {}
|
||||
if (
|
||||
"action_value_mappings" not in device_config["class"]
|
||||
or device_config["class"]["action_value_mappings"] is None
|
||||
):
|
||||
if "action_value_mappings" not in device_config["class"] or device_config["class"]["action_value_mappings"] is None:
|
||||
device_config["class"]["action_value_mappings"] = {}
|
||||
enhanced_info = {}
|
||||
if complete_registry:
|
||||
@@ -692,9 +631,7 @@ class Registry:
|
||||
"goal": {},
|
||||
"feedback": {},
|
||||
"result": {},
|
||||
"schema": self._generate_unilab_json_command_schema(
|
||||
v["args"], k, v.get("return_annotation")
|
||||
),
|
||||
"schema": self._generate_unilab_json_command_schema(v["args"], k),
|
||||
"goal_default": {i["name"]: i["default"] for i in v["args"]},
|
||||
"handles": [],
|
||||
"placeholder_keys": {
|
||||
|
||||
@@ -2,7 +2,7 @@ container:
  category:
  - container
  class:
    module: unilabos.resources.container:get_regular_container
    module: unilabos.resources.container:RegularContainer
    type: pylabrobot
  description: regular organic container
  handles:
|
||||
|
||||
@@ -15,9 +15,8 @@ from unilabos.resources.bioyond.warehouses import (
    bioyond_warehouse_5x1x1,
    bioyond_warehouse_1x8x4,
    bioyond_warehouse_reagent_storage,
    # bioyond_warehouse_liquid_preparation,
    bioyond_warehouse_liquid_preparation,
    bioyond_warehouse_tipbox_storage, # 新增:Tip盒堆栈
    bioyond_warehouse_density_vial,
)


@@ -44,20 +43,17 @@ class BIOYOND_PolymerReactionStation_Deck(Deck):
            "堆栈1左": bioyond_warehouse_1x4x4("堆栈1左"), # 左侧堆栈: A01~D04
            "堆栈1右": bioyond_warehouse_1x4x4_right("堆栈1右"), # 右侧堆栈: A05~D08
            "站内试剂存放堆栈": bioyond_warehouse_reagent_storage("站内试剂存放堆栈"), # A01~A02
            # "移液站内10%分装液体准备仓库": bioyond_warehouse_liquid_preparation("移液站内10%分装液体准备仓库"), # A01~B04
            "站内Tip盒堆栈": bioyond_warehouse_tipbox_storage("站内Tip盒堆栈"), # A01~B03, 存放枪头盒.
            "测量小瓶仓库(测密度)": bioyond_warehouse_density_vial("测量小瓶仓库(测密度)"), # A01~B03
            "移液站内10%分装液体准备仓库": bioyond_warehouse_liquid_preparation("移液站内10%分装液体准备仓库"), # A01~B04
            "站内Tip盒堆栈": bioyond_warehouse_tipbox_storage("站内Tip盒堆栈"), # A01~B03, 存放枪头盒
        }
|
||||
self.warehouse_locations = {
|
||||
"堆栈1左": Coordinate(0.0, 430.0, 0.0), # 左侧位置
|
||||
"堆栈1右": Coordinate(2500.0, 430.0, 0.0), # 右侧位置
|
||||
"站内试剂存放堆栈": Coordinate(640.0, 480.0, 0.0),
|
||||
# "移液站内10%分装液体准备仓库": Coordinate(1200.0, 600.0, 0.0),
|
||||
"移液站内10%分装液体准备仓库": Coordinate(1200.0, 600.0, 0.0),
|
||||
"站内Tip盒堆栈": Coordinate(300.0, 150.0, 0.0),
|
||||
"测量小瓶仓库(测密度)": Coordinate(922.0, 552.0, 0.0),
|
||||
}
|
||||
self.warehouses["站内试剂存放堆栈"].rotation = Rotation(z=90)
|
||||
self.warehouses["测量小瓶仓库(测密度)"].rotation = Rotation(z=270)
|
||||
|
||||
for warehouse_name, warehouse in self.warehouses.items():
|
||||
self.assign_child_resource(warehouse, location=self.warehouse_locations[warehouse_name])
|
||||
|
||||
@@ -1,6 +1,5 @@
|
||||
from unilabos.resources.warehouse import WareHouse, warehouse_factory
|
||||
|
||||
# ================ 反应站相关堆栈 ================
|
||||
|
||||
def bioyond_warehouse_1x4x4(name: str) -> WareHouse:
|
||||
"""创建BioYond 4x4x1仓库 (左侧堆栈: A01~D04)
|
||||
@@ -27,6 +26,7 @@ def bioyond_warehouse_1x4x4(name: str) -> WareHouse:
|
||||
layout="row-major", # ⭐ 改为行优先排序
|
||||
)
|
||||
|
||||
|
||||
def bioyond_warehouse_1x4x4_right(name: str) -> WareHouse:
|
||||
"""创建BioYond 4x4x1仓库 (右侧堆栈: A05~D08)"""
|
||||
return warehouse_factory(
|
||||
@@ -45,35 +45,15 @@ def bioyond_warehouse_1x4x4_right(name: str) -> WareHouse:
|
||||
layout="row-major", # ⭐ 改为行优先排序
|
||||
)
|
||||
|
||||
def bioyond_warehouse_density_vial(name: str) -> WareHouse:
|
||||
"""创建测量小瓶仓库(测密度) A01~B03"""
|
||||
return warehouse_factory(
|
||||
name=name,
|
||||
num_items_x=3, # 3列(01-03)
|
||||
num_items_y=2, # 2行(A-B)
|
||||
num_items_z=1, # 1层
|
||||
dx=10.0,
|
||||
dy=10.0,
|
||||
dz=10.0,
|
||||
item_dx=40.0,
|
||||
item_dy=40.0,
|
||||
item_dz=50.0,
|
||||
# 用更小的 resource_size 来表现 "小点的孔位"
|
||||
resource_size_x=30.0,
|
||||
resource_size_y=30.0,
|
||||
resource_size_z=12.0,
|
||||
category="warehouse",
|
||||
col_offset=0,
|
||||
layout="row-major",
|
||||
)
|
||||
|
||||
def bioyond_warehouse_reagent_storage(name: str) -> WareHouse:
|
||||
"""创建BioYond站内试剂存放堆栈(A01~A02, 1行×2列)"""
|
||||
|
||||
def bioyond_warehouse_1x4x2(name: str) -> WareHouse:
|
||||
"""创建BioYond 4x1x2仓库"""
|
||||
return warehouse_factory(
name=name,
num_items_x=2, # 2列(01-02)
num_items_y=1, # 1行(A)
num_items_z=1, # 1层
num_items_x=1,
num_items_y=4,
num_items_z=2,
dx=10.0,
dy=10.0,
dz=10.0,
@@ -81,46 +61,9 @@ def bioyond_warehouse_reagent_storage(name: str) -> WareHouse:
item_dy=96.0,
item_dz=120.0,
category="warehouse",
removed_positions=None
)

def bioyond_warehouse_tipbox_storage(name: str) -> WareHouse:
"""创建BioYond站内Tip盒堆栈(A01~B03),用于存放枪头盒"""
return warehouse_factory(
name=name,
num_items_x=3, # 3列(01-03)
num_items_y=2, # 2行(A-B)
num_items_z=1, # 1层
dx=10.0,
dy=10.0,
dz=10.0,
item_dx=137.0,
item_dy=96.0,
item_dz=120.0,
category="warehouse",
col_offset=0,
layout="row-major",
)

def bioyond_warehouse_liquid_preparation(name: str) -> WareHouse:
"""已弃用,创建BioYond移液站内10%分装液体准备仓库(A01~B04)"""
return warehouse_factory(
name=name,
num_items_x=4, # 4列(01-04)
num_items_y=2, # 2行(A-B)
num_items_z=1, # 1层
dx=10.0,
dy=10.0,
dz=10.0,
item_dx=137.0,
item_dy=96.0,
item_dz=120.0,
category="warehouse",
col_offset=0,
layout="row-major",
)

# ================ 配液站相关堆栈 ================

def bioyond_warehouse_reagent_stack(name: str) -> WareHouse:
"""创建BioYond 试剂堆栈 2x4x1 (2行×4列: A01-A04, B01-B04)

@@ -145,28 +88,8 @@ def bioyond_warehouse_reagent_stack(name: str) -> WareHouse:
)

# 定义bioyond的堆栈

# =================== Other ===================

def bioyond_warehouse_1x4x2(name: str) -> WareHouse:
"""创建BioYond 4x2x1仓库"""
return warehouse_factory(
name=name,
num_items_x=1,
num_items_y=4,
num_items_z=2,
dx=10.0,
dy=10.0,
dz=10.0,
item_dx=137.0,
item_dy=96.0,
item_dz=120.0,
category="warehouse",
removed_positions=None
)

def bioyond_warehouse_1x2x2(name: str) -> WareHouse:
"""创建BioYond 1x2x2仓库"""
"""创建BioYond 4x1x4仓库"""
return warehouse_factory(
name=name,
num_items_x=1,
@@ -180,9 +103,8 @@ def bioyond_warehouse_1x2x2(name: str) -> WareHouse:
item_dz=120.0,
category="warehouse",
)

def bioyond_warehouse_10x1x1(name: str) -> WareHouse:
"""创建BioYond 10x1x1仓库"""
"""创建BioYond 4x1x4仓库"""
return warehouse_factory(
name=name,
num_items_x=10,
@@ -196,9 +118,8 @@ def bioyond_warehouse_10x1x1(name: str) -> WareHouse:
item_dz=120.0,
category="warehouse",
)

def bioyond_warehouse_1x3x3(name: str) -> WareHouse:
"""创建BioYond 1x3x3仓库"""
"""创建BioYond 4x1x4仓库"""
return warehouse_factory(
name=name,
num_items_x=1,
@@ -212,9 +133,8 @@ def bioyond_warehouse_1x3x3(name: str) -> WareHouse:
item_dz=120.0,
category="warehouse",
)

def bioyond_warehouse_2x1x3(name: str) -> WareHouse:
"""创建BioYond 2x1x3仓库"""
"""创建BioYond 4x1x4仓库"""
return warehouse_factory(
name=name,
num_items_x=2,
@@ -230,7 +150,7 @@ def bioyond_warehouse_2x1x3(name: str) -> WareHouse:
)

def bioyond_warehouse_3x3x1(name: str) -> WareHouse:
"""创建BioYond 3x3x1仓库"""
"""创建BioYond 4x1x4仓库"""
return warehouse_factory(
name=name,
num_items_x=3,
@@ -244,9 +164,8 @@ def bioyond_warehouse_3x3x1(name: str) -> WareHouse:
item_dz=120.0,
category="warehouse",
)

def bioyond_warehouse_5x1x1(name: str) -> WareHouse:
"""已弃用:创建BioYond 5x1x1仓库"""
"""创建BioYond 4x1x4仓库"""
return warehouse_factory(
name=name,
num_items_x=5,
@@ -260,9 +179,8 @@ def bioyond_warehouse_5x1x1(name: str) -> WareHouse:
item_dz=120.0,
category="warehouse",
)

def bioyond_warehouse_3x3x1_2(name: str) -> WareHouse:
"""已弃用:创建BioYond 3x3x1仓库"""
"""创建BioYond 4x1x4仓库"""
return warehouse_factory(
name=name,
num_items_x=3,
@@ -294,6 +212,7 @@ def bioyond_warehouse_liquid_and_lid_handling(name: str) -> WareHouse:
removed_positions=None
)


def bioyond_warehouse_1x8x4(name: str) -> WareHouse:
"""创建BioYond 8x4x1反应站堆栈(A01~D08)"""
return warehouse_factory(
@@ -309,3 +228,58 @@ def bioyond_warehouse_1x8x4(name: str) -> WareHouse:
item_dz=130.0,
category="warehouse",
)


def bioyond_warehouse_reagent_storage(name: str) -> WareHouse:
"""创建BioYond站内试剂存放堆栈(A01~A02, 1行×2列)"""
return warehouse_factory(
name=name,
num_items_x=2, # 2列(01-02)
num_items_y=1, # 1行(A)
num_items_z=1, # 1层
dx=10.0,
dy=10.0,
dz=10.0,
item_dx=137.0,
item_dy=96.0,
item_dz=120.0,
category="warehouse",
)


def bioyond_warehouse_liquid_preparation(name: str) -> WareHouse:
"""创建BioYond移液站内10%分装液体准备仓库(A01~B04)"""
return warehouse_factory(
name=name,
num_items_x=4, # 4列(01-04)
num_items_y=2, # 2行(A-B)
num_items_z=1, # 1层
dx=10.0,
dy=10.0,
dz=10.0,
item_dx=137.0,
item_dy=96.0,
item_dz=120.0,
category="warehouse",
col_offset=0,
layout="row-major",
)


def bioyond_warehouse_tipbox_storage(name: str) -> WareHouse:
"""创建BioYond站内Tip盒堆栈(A01~B03),用于存放枪头盒"""
return warehouse_factory(
name=name,
num_items_x=3, # 3列(01-03)
num_items_y=2, # 2行(A-B)
num_items_z=1, # 1层
dx=10.0,
dy=10.0,
dz=10.0,
item_dx=137.0,
item_dy=96.0,
item_dz=120.0,
category="warehouse",
col_offset=0,
layout="row-major",
)

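The stack definitions above are all thin wrappers around warehouse_factory. A minimal usage sketch (not part of the commits; it assumes the factory is imported from this module, and get_rack_at_position comes from the WareHouse hunk further below):

    # Hypothetical usage of the factories shown in this diff.
    wh = bioyond_warehouse_tipbox_storage("tipbox_stack")   # 3 columns x 2 rows x 1 layer
    rack = wh.get_rack_at_position(row=0, col=0, layer=0)   # resource parked at A01, or None if empty
    print(wh.name, rack)
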
@@ -22,13 +22,6 @@ class RegularContainer(Container):

def load_state(self, state: Dict[str, Any]):
self.state = state


def get_regular_container(name="container"):
r = RegularContainer(name=name)
r.category = "container"
return RegularContainer(name=name)

#
# class RegularContainer(object):
# # 第一个参数必须是id传入

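As written in this hunk, get_regular_container builds r and sets r.category, but then returns a brand-new RegularContainer, so the category assignment is silently discarded. A minimal corrected sketch (an editorial suggestion, not taken from the commits):

    def get_regular_container(name="container"):
        r = RegularContainer(name=name)
        r.category = "container"
        return r  # return the instance whose category was just set
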
@@ -45,13 +45,10 @@ def canonicalize_nodes_data(
print_status(f"{len(nodes)} Resources loaded:", "info")

# 第一步:基本预处理(处理graphml的label字段)
outer_host_node_id = None
for idx, node in enumerate(nodes):
for node in nodes:
if node.get("label") is not None:
node_id = node.pop("label")
node["id"] = node["name"] = node_id
if node["id"] == "host_node":
outer_host_node_id = idx
if not isinstance(node.get("config"), dict):
node["config"] = {}
if not node.get("type"):
@@ -61,26 +58,25 @@ def canonicalize_nodes_data(
node["name"] = node.get("id")
print_status(f"Warning: Node {node.get('id', 'unknown')} missing 'name', defaulting to {node['name']}", "warning")
if not isinstance(node.get("position"), dict):
node["pose"] = {"position": {}}
node["position"] = {"position": {}}
x = node.pop("x", None)
if x is not None:
node["pose"]["position"]["x"] = x
node["position"]["position"]["x"] = x
y = node.pop("y", None)
if y is not None:
node["pose"]["position"]["y"] = y
node["position"]["position"]["y"] = y
z = node.pop("z", None)
if z is not None:
node["pose"]["position"]["z"] = z
node["position"]["position"]["z"] = z
if "sample_id" in node:
sample_id = node.pop("sample_id")
if sample_id:
logger.error(f"{node}的sample_id参数已弃用,sample_id: {sample_id}")
for k in list(node.keys()):
if k not in ["id", "uuid", "name", "description", "schema", "model", "icon", "parent_uuid", "parent", "type", "class", "position", "config", "data", "children", "pose"]:
if k not in ["id", "uuid", "name", "description", "schema", "model", "icon", "parent_uuid", "parent", "type", "class", "position", "config", "data", "children"]:
v = node.pop(k)
node["config"][k] = v
if outer_host_node_id is not None:
nodes.pop(outer_host_node_id)

# 第二步:处理parent_relation
id2idx = {node["id"]: idx for idx, node in enumerate(nodes)}
for parent, children in parent_relation.items():
@@ -97,7 +93,7 @@ def canonicalize_nodes_data(

for node in nodes:
try:
# print_status(f"DeviceId: {node['id']}, Class: {node['class']}", "info")
print_status(f"DeviceId: {node['id']}, Class: {node['class']}", "info")
# 使用标准化方法
resource_instance = ResourceDictInstance.get_resource_instance_from_dict(node)
known_nodes[node["id"]] = resource_instance
@@ -586,15 +582,11 @@ def resource_plr_to_ulab(resource_plr: "ResourcePLR", parent_name: str = None, w
"tip_rack": "tip_rack",
"warehouse": "warehouse",
"container": "container",
"tube": "tube",
"bottle_carrier": "bottle_carrier",
"plate_adapter": "plate_adapter",
}
if source in replace_info:
return replace_info[source]
else:
if source is not None:
logger.warning(f"转换pylabrobot的时候,出现未知类型: {source}")
logger.warning(f"转换pylabrobot的时候,出现未知类型: {source}")
return source

def resource_plr_to_ulab_inner(d: dict, all_states: dict, child=True) -> dict:

@@ -19,9 +19,6 @@ def warehouse_factory(
item_dx: float = 10.0,
item_dy: float = 10.0,
item_dz: float = 10.0,
resource_size_x: float = 127.0,
resource_size_y: float = 86.0,
resource_size_z: float = 25.0,
removed_positions: Optional[List[int]] = None,
empty: bool = False,
category: str = "warehouse",
@@ -53,9 +50,8 @@ def warehouse_factory(
_sites = create_homogeneous_resources(
klass=ResourceHolder,
locations=locations,
resource_size_x=resource_size_x,
resource_size_y=resource_size_y,
resource_size_z=resource_size_z,
resource_size_x=127.0,
resource_size_y=86.0,
name_prefix=name,
)
len_x, len_y = (num_items_x, num_items_y) if num_items_z == 1 else (num_items_y, num_items_z) if num_items_x == 1 else (num_items_x, num_items_z)
@@ -146,4 +142,4 @@ class WareHouse(ItemizedCarrier):

def get_rack_at_position(self, row: int, col: int, layer: int):
site = self.get_site_by_layer_position(row, col, layer)
return site.resource
return site.resource
@@ -5,7 +5,6 @@ from unilabos.ros.msgs.message_converter (
get_action_type,
)
from unilabos.ros.nodes.base_device_node import init_wrapper, ROS2DeviceNode
from unilabos.ros.nodes.resource_tracker import ResourceDictInstance

# 定义泛型类型变量
T = TypeVar("T")
@@ -19,11 +18,12 @@ class ROS2DeviceNodeWrapper(ROS2DeviceNode):

def ros2_device_node(
cls: Type[T],
device_config: Optional[ResourceDictInstance] = None,
device_config: Optional[Dict[str, Any]] = None,
status_types: Optional[Dict[str, Any]] = None,
action_value_mappings: Optional[Dict[str, Any]] = None,
hardware_interface: Optional[Dict[str, Any]] = None,
print_publish: bool = False,
children: Optional[Dict[str, Any]] = None,
) -> Type[ROS2DeviceNodeWrapper]:
"""Create a ROS2 Node class for a device class with properties and actions.

@@ -45,7 +45,7 @@ def ros2_device_node(
if status_types is None:
status_types = {}
if device_config is None:
raise ValueError("device_config cannot be None")
device_config = {}
if action_value_mappings is None:
action_value_mappings = {}
if hardware_interface is None:
@@ -82,6 +82,7 @@ def ros2_device_node(
action_value_mappings=action_value_mappings,
hardware_interface=hardware_interface,
print_publish=print_publish,
children=children,
*args,
**kwargs,
),

@@ -4,14 +4,13 @@ from typing import Optional
from unilabos.registry.registry import lab_registry
from unilabos.ros.device_node_wrapper import ros2_device_node
from unilabos.ros.nodes.base_device_node import ROS2DeviceNode, DeviceInitError
from unilabos.ros.nodes.resource_tracker import ResourceDictInstance
from unilabos.utils import logger
from unilabos.utils.exception import DeviceClassInvalid
from unilabos.utils.import_manager import default_manager



def initialize_device_from_dict(device_id, device_config: ResourceDictInstance) -> Optional[ROS2DeviceNode]:
def initialize_device_from_dict(device_id, device_config) -> Optional[ROS2DeviceNode]:
"""Initializes a device based on its configuration.

This function dynamically imports the appropriate device class and creates an instance of it using the provided device configuration.
@@ -25,14 +24,15 @@ def initialize_device_from_dict(device_id, device_config: ResourceDictInstance)
None
"""
d = None
device_class_config = device_config.res_content.klass
uid = device_config.res_content.uuid
original_device_config = copy.deepcopy(device_config)
device_class_config = device_config["class"]
uid = device_config["uuid"]
if isinstance(device_class_config, str):  # 如果是字符串,则直接去lab_registry中查找,获取class
if len(device_class_config) == 0:
raise DeviceClassInvalid(f"Device [{device_id}] class cannot be an empty string. {device_config}")
if device_class_config not in lab_registry.device_type_registry:
raise DeviceClassInvalid(f"Device [{device_id}] class {device_class_config} not found. {device_config}")
device_class_config = lab_registry.device_type_registry[device_class_config]["class"]
device_class_config = device_config["class"] = lab_registry.device_type_registry[device_class_config]["class"]
elif isinstance(device_class_config, dict):
raise DeviceClassInvalid(f"Device [{device_id}] class config should be type 'str' but 'dict' got. {device_config}")
if isinstance(device_class_config, dict):
@@ -41,16 +41,17 @@ def initialize_device_from_dict(device_id, device_config: ResourceDictInstance)
DEVICE = ros2_device_node(
DEVICE,
status_types=device_class_config.get("status_types", {}),
device_config=device_config,
device_config=original_device_config,
action_value_mappings=device_class_config.get("action_value_mappings", {}),
hardware_interface=device_class_config.get(
"hardware_interface",
{"name": "hardware_interface", "write": "send_command", "read": "read_data", "extra_info": []},
)
),
children=device_config.get("children", {})
)
try:
d = DEVICE(
device_id=device_id, device_uuid=uid, driver_is_ros=device_class_config["type"] == "ros2", driver_params=device_config.res_content.config
device_id=device_id, device_uuid=uid, driver_is_ros=device_class_config["type"] == "ros2", driver_params=device_config.get("config", {})
)
except DeviceInitError as ex:
return d

@@ -192,7 +192,7 @@ def slave(
for device_config in devices_config.root_nodes:
device_id = device_config.res_content.id
if device_config.res_content.type == "device":
d = initialize_device_from_dict(device_id, device_config)
d = initialize_device_from_dict(device_id, device_config.get_nested_dict())
if d is not None:
devices_instances[device_id] = d
logger.info(f"Device {device_id} initialized.")

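The two variants of initialize_device_from_dict above differ only in the shape of device_config: one reads fields through ResourceDictInstance.res_content, the other expects the plain nested dict produced by get_nested_dict(). For the dict-based variant, a minimal config only needs the keys the function actually touches (a hypothetical sketch; the values are placeholders, and the class name must exist in lab_registry.device_type_registry):

    device_config = {
        "class": "mock_chiller",      # resolved via lab_registry.device_type_registry
        "uuid": "00000000-0000-0000-0000-000000000000",
        "config": {},                  # forwarded as driver_params
        "children": {},                # forwarded to ros2_device_node(children=...)
    }
    node = initialize_device_from_dict("chiller_1", device_config)
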
@@ -48,7 +48,7 @@ from unilabos_msgs.msg import Resource # type: ignore
from unilabos.ros.nodes.resource_tracker import (
DeviceNodeResourceTracker,
ResourceTreeSet,
ResourceTreeInstance, ResourceDictInstance,
ResourceTreeInstance,
)
from unilabos.ros.x.rclpyx import get_event_loop
from unilabos.ros.utils.driver_creator import WorkstationNodeCreator, PyLabRobotCreator, DeviceClassCreator
@@ -133,11 +133,12 @@ def init_wrapper(
device_id: str,
device_uuid: str,
driver_class: type[T],
device_config: ResourceTreeInstance,
device_config: Dict[str, Any],
status_types: Dict[str, Any],
action_value_mappings: Dict[str, Any],
hardware_interface: Dict[str, Any],
print_publish: bool,
children: Optional[list] = None,
driver_params: Optional[Dict[str, Any]] = None,
driver_is_ros: bool = False,
*args,
@@ -146,6 +147,8 @@ def init_wrapper(
"""初始化设备节点的包装函数,和ROS2DeviceNode初始化保持一致"""
if driver_params is None:
driver_params = kwargs.copy()
if children is None:
children = []
kwargs["device_id"] = device_id
kwargs["device_uuid"] = device_uuid
kwargs["driver_class"] = driver_class
@@ -154,6 +157,7 @@ def init_wrapper(
kwargs["status_types"] = status_types
kwargs["action_value_mappings"] = action_value_mappings
kwargs["hardware_interface"] = hardware_interface
kwargs["children"] = children
kwargs["print_publish"] = print_publish
kwargs["driver_is_ros"] = driver_is_ros
super(type(self), self).__init__(*args, **kwargs)
@@ -582,7 +586,7 @@ class BaseROS2DeviceNode(Node, Generic[T]):
except Exception as e:
self.lab_logger().error(f"更新资源uuid失败: {e}")
self.lab_logger().error(traceback.format_exc())
self.lab_logger().trace(f"资源更新结果: {response}")
self.lab_logger().debug(f"资源更新结果: {response}")

async def get_resource(self, resources_uuid: List[str], with_children: bool = True) -> ResourceTreeSet:
"""
@@ -1140,7 +1144,7 @@ class BaseROS2DeviceNode(Node, Generic[T]):
queried_resources = []
for resource_data in resource_inputs:
plr_resource = await self.get_resource_with_dir(
resource_id=resource_data["id"], with_children=True
resource_ids=resource_data["id"], with_children=True
)
queried_resources.append(plr_resource)

@@ -1164,6 +1168,7 @@ class BaseROS2DeviceNode(Node, Generic[T]):
execution_error = traceback.format_exc()
break

##### self.lab_logger().info(f"准备执行: {action_kwargs}, 函数: {ACTION.__name__}")
time_start = time.time()
time_overall = 100
future = None
@@ -1171,36 +1176,35 @@ class BaseROS2DeviceNode(Node, Generic[T]):
# 将阻塞操作放入线程池执行
if asyncio.iscoroutinefunction(ACTION):
try:
self.lab_logger().trace(f"异步执行动作 {ACTION}")
def _handle_future_exception(fut: Future):
##### self.lab_logger().info(f"异步执行动作 {ACTION}")
future = ROS2DeviceNode.run_async_func(ACTION, trace_error=False, **action_kwargs)

def _handle_future_exception(fut):
nonlocal execution_error, execution_success, action_return_value
try:
action_return_value = fut.result()
if isinstance(action_return_value, BaseException):
raise action_return_value
execution_success = True
except Exception as _:
except Exception as e:
execution_error = traceback.format_exc()
error(
f"异步任务 {ACTION.__name__} 报错了\n{traceback.format_exc()}\n原始输入:{action_kwargs}"
)

future = ROS2DeviceNode.run_async_func(ACTION, trace_error=False, **action_kwargs)
future.add_done_callback(_handle_future_exception)
except Exception as e:
execution_error = traceback.format_exc()
execution_success = False
self.lab_logger().error(f"创建异步任务失败: {traceback.format_exc()}")
else:
self.lab_logger().trace(f"同步执行动作 {ACTION}")
##### self.lab_logger().info(f"同步执行动作 {ACTION}")
future = self._executor.submit(ACTION, **action_kwargs)

def _handle_future_exception(fut: Future):
def _handle_future_exception(fut):
nonlocal execution_error, execution_success, action_return_value
try:
action_return_value = fut.result()
execution_success = True
except Exception as _:
except Exception as e:
execution_error = traceback.format_exc()
error(
f"同步任务 {ACTION.__name__} 报错了\n{traceback.format_exc()}\n原始输入:{action_kwargs}"
@@ -1305,7 +1309,7 @@ class BaseROS2DeviceNode(Node, Generic[T]):
get_result_info_str(execution_error, execution_success, action_return_value),
)

self.lab_logger().trace(f"动作 {action_name} 完成并返回结果")
##### self.lab_logger().info(f"动作 {action_name} 完成并返回结果")
return result_msg

return execute_callback
@@ -1540,29 +1544,17 @@ class ROS2DeviceNode:
这个类封装了设备类实例和ROS2节点的功能,提供ROS2接口。
它不继承设备类,而是通过代理模式访问设备类的属性和方法。
"""
@staticmethod
async def safe_task_wrapper(trace_callback, func, **kwargs):
try:
if callable(trace_callback):
trace_callback(await func(**kwargs))
return await func(**kwargs)
except Exception as e:
if callable(trace_callback):
trace_callback(e)
return e

@classmethod
def run_async_func(cls, func, trace_error=True, inner_trace_callback=None, **kwargs) -> Task:
def _handle_future_exception(fut: Future):
def run_async_func(cls, func, trace_error=True, **kwargs) -> Task:
def _handle_future_exception(fut):
try:
ret = fut.result()
if isinstance(ret, BaseException):
raise ret
fut.result()
except Exception as e:
error(f"异步任务 {func.__name__} 获取结果失败")
error(f"异步任务 {func.__name__} 报错了")
error(traceback.format_exc())

future = rclpy.get_global_executor().create_task(ROS2DeviceNode.safe_task_wrapper(inner_trace_callback, func, **kwargs))
future = rclpy.get_global_executor().create_task(func(**kwargs))
if trace_error:
future.add_done_callback(_handle_future_exception)
return future
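One branch of run_async_func wraps the coroutine in safe_task_wrapper so that the result or the raised exception is also handed to an optional callback instead of disappearing inside the executor task; note that the wrapper shown above awaits func twice when a callback is given, while the sketch below awaits it once. A standalone illustration of the same pattern with plain asyncio (not tied to rclpy):

    import asyncio

    async def safe_task_wrapper(trace_callback, func, **kwargs):
        # Run func and hand either its result or the raised exception to trace_callback.
        try:
            result = await func(**kwargs)
            if callable(trace_callback):
                trace_callback(result)
            return result
        except Exception as e:
            if callable(trace_callback):
                trace_callback(e)
            return e

    async def main():
        async def boom():
            raise RuntimeError("demo failure")
        outcome = await safe_task_wrapper(print, boom)  # prints the RuntimeError instead of losing it
        print("returned:", outcome)

    asyncio.run(main())
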
@@ -1590,11 +1582,12 @@ class ROS2DeviceNode:
device_id: str,
device_uuid: str,
driver_class: Type[T],
device_config: ResourceDictInstance,
device_config: Dict[str, Any],
driver_params: Dict[str, Any],
status_types: Dict[str, Any],
action_value_mappings: Dict[str, Any],
hardware_interface: Dict[str, Any],
children: Dict[str, Any],
print_publish: bool = True,
driver_is_ros: bool = False,
):
@@ -1605,7 +1598,7 @@ class ROS2DeviceNode:
device_id: 设备标识符
device_uuid: 设备uuid
driver_class: 设备类
device_config: 原始初始化的ResourceDictInstance
device_config: 原始初始化的json
driver_params: driver初始化的参数
status_types: 状态类型映射
action_value_mappings: 动作值映射
@@ -1619,7 +1612,6 @@ class ROS2DeviceNode:
self._has_async_context = hasattr(driver_class, "__aenter__") and hasattr(driver_class, "__aexit__")
self._driver_class = driver_class
self.device_config = device_config
children: List[ResourceDictInstance] = device_config.children
self.driver_is_ros = driver_is_ros
self.driver_is_workstation = False
self.resource_tracker = DeviceNodeResourceTracker()

@@ -5,7 +5,7 @@ import threading
import time
import traceback
import uuid
from typing import TYPE_CHECKING, Optional, Dict, Any, List, ClassVar, Set, TypedDict, Union
from typing import TYPE_CHECKING, Optional, Dict, Any, List, ClassVar, Set, Union

from action_msgs.msg import GoalStatus
from geometry_msgs.msg import Point
@@ -38,7 +38,6 @@ from unilabos.ros.msgs.message_converter import (
from unilabos.ros.nodes.base_device_node import BaseROS2DeviceNode, ROS2DeviceNode, DeviceNodeResourceTracker
from unilabos.ros.nodes.presets.controller_node import ControllerNode
from unilabos.ros.nodes.resource_tracker import (
ResourceDict,
ResourceDictInstance,
ResourceTreeSet,
ResourceTreeInstance,
@@ -49,7 +48,7 @@ from unilabos.utils.type_check import serialize_result_info
from unilabos.registry.placeholder_type import ResourceSlot, DeviceSlot

if TYPE_CHECKING:
from unilabos.app.ws_client import QueueItem
from unilabos.app.ws_client import QueueItem, WSResourceChatData


@dataclass
@@ -57,11 +56,6 @@ class DeviceActionStatus:
job_ids: Dict[str, float] = field(default_factory=dict)


class TestResourceReturn(TypedDict):
resources: List[List[ResourceDict]]
devices: List[DeviceSlot]


class HostNode(BaseROS2DeviceNode):
"""
主机节点类,负责管理设备、资源和控制器
@@ -289,12 +283,6 @@ class HostNode(BaseROS2DeviceNode):
self.lab_logger().info("[Host Node] Host node initialized.")
HostNode._ready_event.set()

# 发送host_node ready信号到所有桥接器
for bridge in self.bridges:
if hasattr(bridge, "publish_host_ready"):
bridge.publish_host_ready()
self.lab_logger().debug(f"Host ready signal sent via {bridge.__class__.__name__}")

def _send_re_register(self, sclient):
sclient.wait_for_service()
request = SerialCommand.Request()
@@ -538,7 +526,7 @@ class HostNode(BaseROS2DeviceNode):
self.lab_logger().info(f"[Host Node] Initializing device: {device_id}")

try:
d = initialize_device_from_dict(device_id, device_config)
d = initialize_device_from_dict(device_id, device_config.get_nested_dict())
except DeviceClassInvalid as e:
self.lab_logger().error(f"[Host Node] Device class invalid: {e}")
d = None
@@ -718,7 +706,7 @@ class HostNode(BaseROS2DeviceNode):
feedback_callback=lambda feedback_msg: self.feedback_callback(item, action_id, feedback_msg),
goal_uuid=goal_uuid_obj,
)
future.add_done_callback(lambda f: self.goal_response_callback(item, action_id, f))
future.add_done_callback(lambda future: self.goal_response_callback(item, action_id, future))

def goal_response_callback(self, item: "QueueItem", action_id: str, future) -> None:
"""目标响应回调"""
@@ -729,11 +717,9 @@ class HostNode(BaseROS2DeviceNode):

self.lab_logger().info(f"[Host Node] Goal {action_id} ({item.job_id}) accepted")
self._goals[item.job_id] = goal_handle
goal_future = goal_handle.get_result_async()
goal_future.add_done_callback(
lambda f: self.get_result_callback(item, action_id, f)
goal_handle.get_result_async().add_done_callback(
lambda future: self.get_result_callback(item, action_id, future)
)
goal_future.result()

def feedback_callback(self, item: "QueueItem", action_id: str, feedback_msg) -> None:
"""反馈回调"""
@@ -766,12 +752,6 @@ class HostNode(BaseROS2DeviceNode):
if return_info_str is not None:
try:
return_info = json.loads(return_info_str)
# 适配后端的一些额外处理
return_value = return_info.get("return_value")
if isinstance(return_value, dict):
unilabos_samples = return_info.get("unilabos_samples")
if isinstance(unilabos_samples, list):
return_info["unilabos_samples"] = unilabos_samples
suc = return_info.get("suc", False)
if not suc:
status = "failed"
@@ -799,17 +779,6 @@ class HostNode(BaseROS2DeviceNode):
del self._goals[job_id]
self.lab_logger().debug(f"[Host Node] Removed goal {job_id[:8]} from _goals")

# 存储结果供 HTTP API 查询
try:
from unilabos.app.web.controller import store_job_result

if goal_status == GoalStatus.STATUS_CANCELED:
store_job_result(job_id, status, return_info, {})
else:
store_job_result(job_id, status, return_info, result_data)
except ImportError:
pass  # controller 模块可能未加载

# 发布状态到桥接器
if job_id:
for bridge in self.bridges:
@@ -1157,12 +1126,11 @@ class HostNode(BaseROS2DeviceNode):
响应对象,包含查询到的资源
"""
try:
from unilabos.app.web import http_client
data = json.loads(request.command)
if "uuid" in data and data["uuid"] is not None:
http_req = http_client.resource_tree_get([data["uuid"]], data["with_children"])
http_req = self.bridges[-1].resource_tree_get([data["uuid"]], data["with_children"])
elif "id" in data and data["id"].startswith("/"):
http_req = http_client.resource_get(data["id"], data["with_children"])
http_req = self.bridges[-1].resource_get(data["id"], data["with_children"])
else:
raise ValueError("没有使用正确的物料 id 或 uuid")
response.response = json.dumps(http_req["data"])
@@ -1372,7 +1340,7 @@ class HostNode(BaseROS2DeviceNode):

def test_resource(
self, resource: ResourceSlot, resources: List[ResourceSlot], device: DeviceSlot, devices: List[DeviceSlot]
) -> TestResourceReturn:
):
return {
"resources": ResourceTreeSet.from_plr_resources([resource, *resources]).dump(),
"devices": [device, *devices],

@@ -24,7 +24,7 @@ from unilabos.ros.msgs.message_converter import (
convert_from_ros_msg_with_mapping,
)
from unilabos.ros.nodes.base_device_node import BaseROS2DeviceNode, DeviceNodeResourceTracker, ROS2DeviceNode
from unilabos.ros.nodes.resource_tracker import ResourceTreeSet, ResourceDictInstance
from unilabos.ros.nodes.resource_tracker import ResourceTreeSet
from unilabos.utils.type_check import get_result_info_str

if TYPE_CHECKING:
@@ -47,7 +47,7 @@ class ROS2WorkstationNode(BaseROS2DeviceNode):
def __init__(
self,
protocol_type: List[str],
children: List[ResourceDictInstance],
children: Dict[str, Any],
*,
driver_instance: "WorkstationBase",
device_id: str,
@@ -81,11 +81,10 @@ class ROS2WorkstationNode(BaseROS2DeviceNode):
# 初始化子设备
self.communication_node_id_to_instance = {}

for device_config in self.children:
device_id = device_config.res_content.id
if device_config.res_content.type != "device":
for device_id, device_config in self.children.items():
if device_config.get("type", "device") != "device":
self.lab_logger().debug(
f"[Protocol Node] Skipping type {device_config.res_content.type} {device_id} already existed, skipping."
f"[Protocol Node] Skipping type {device_config['type']} {device_id} already existed, skipping."
)
continue
try:
@@ -102,9 +101,8 @@ class ROS2WorkstationNode(BaseROS2DeviceNode):
self.communication_node_id_to_instance[device_id] = d
continue

for device_config in self.children:
device_id = device_config.res_content.id
if device_config.res_content.type != "device":
for device_id, device_config in self.children.items():
if device_config.get("type", "device") != "device":
continue
# 设置硬件接口代理
if device_id not in self.sub_devices:

@@ -1,11 +1,9 @@
import inspect
import traceback
import uuid
from pydantic import BaseModel, field_serializer, field_validator
from pydantic import Field
from typing import List, Tuple, Any, Dict, Literal, Optional, cast, TYPE_CHECKING, Union

from unilabos.resources.plr_additional_res_reg import register
from unilabos.utils.log import logger

if TYPE_CHECKING:
@@ -64,6 +62,7 @@ class ResourceDict(BaseModel):
parent: Optional["ResourceDict"] = Field(description="Parent resource object", default=None, exclude=True)
type: Union[Literal["device"], str] = Field(description="Resource type")
klass: str = Field(alias="class", description="Resource class name")
position: ResourceDictPosition = Field(description="Resource position", default_factory=ResourceDictPosition)
pose: ResourceDictPosition = Field(description="Resource position", default_factory=ResourceDictPosition)
config: Dict[str, Any] = Field(description="Resource configuration")
data: Dict[str, Any] = Field(description="Resource data")
@@ -147,16 +146,15 @@ class ResourceDictInstance(object):
if not content.get("extra"):  # MagicCode
content["extra"] = {}
if "pose" not in content:
content["pose"] = content.pop("position", {})
content["pose"] = content.get("position", {})
return ResourceDictInstance(ResourceDict.model_validate(content))

def get_plr_nested_dict(self) -> Dict[str, Any]:
def get_nested_dict(self) -> Dict[str, Any]:
"""获取资源实例的嵌套字典表示"""
res_dict = self.res_content.model_dump(by_alias=True)
res_dict["children"] = {child.res_content.id: child.get_plr_nested_dict() for child in self.children}
res_dict["children"] = {child.res_content.id: child.get_nested_dict() for child in self.children}
res_dict["parent"] = self.res_content.parent_instance_name
res_dict["position"] = self.res_content.pose.position.model_dump()
del res_dict["pose"]
res_dict["position"] = self.res_content.position.position.model_dump()
return res_dict


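Both versions of the method above flatten a ResourceDictInstance tree into one nested mapping keyed by child id; they differ only in whether the coordinates are read from the newer pose field or the legacy position field. A toy illustration of the resulting shape (field names follow the hunk; ids and coordinates are placeholders):

    # Hypothetical output shape of get_plr_nested_dict() / get_nested_dict().
    example_nested = {
        "id": "deck_1",
        "class": "deck",
        "parent": None,
        "position": {"x": 0.0, "y": 0.0, "z": 0.0},
        "children": {
            "plate_1": {
                "id": "plate_1",
                "class": "plate",
                "parent": "deck_1",
                "position": {"x": 10.0, "y": 20.0, "z": 0.0},
                "children": {},
            },
        },
    }
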
@@ -431,9 +429,9 @@ class ResourceTreeSet(object):
Returns:
List[PLRResource]: PLR 资源实例列表
"""
register()
from pylabrobot.resources import Resource as PLRResource
from pylabrobot.utils.object_parsing import find_subclass
import inspect

# 类型映射
TYPE_MAP = {"plate": "Plate", "well": "Well", "deck": "Deck", "container": "RegularContainer"}
@@ -461,9 +459,9 @@ class ResourceTreeSet(object):
"size_y": res.config.get("size_y", 0),
"size_z": res.config.get("size_z", 0),
"location": {
"x": res.pose.position.x,
"y": res.pose.position.y,
"z": res.pose.position.z,
"x": res.position.position.x,
"y": res.position.position.y,
"z": res.position.position.z,
"type": "Coordinate",
},
"rotation": {"x": 0, "y": 0, "z": 0, "type": "Rotation"},

@@ -9,11 +9,10 @@ import asyncio
import inspect
import traceback
from abc import abstractmethod
from typing import Type, Any, Dict, Optional, TypeVar, Generic, List
from typing import Type, Any, Dict, Optional, TypeVar, Generic

from unilabos.resources.graphio import nested_dict_to_list, resource_ulab_to_plr
from unilabos.ros.nodes.resource_tracker import DeviceNodeResourceTracker, ResourceTreeSet, ResourceDictInstance, \
ResourceTreeInstance
from unilabos.ros.nodes.resource_tracker import DeviceNodeResourceTracker
from unilabos.utils import logger, import_manager
from unilabos.utils.cls_creator import create_instance_from_config

@@ -34,7 +33,7 @@ class DeviceClassCreator(Generic[T]):
这个类提供了从任意类创建实例的通用方法。
"""

def __init__(self, cls: Type[T], children: List[ResourceDictInstance], resource_tracker: DeviceNodeResourceTracker):
def __init__(self, cls: Type[T], children: Dict[str, Any], resource_tracker: DeviceNodeResourceTracker):
"""
初始化设备类创建器

@@ -51,9 +50,9 @@ class DeviceClassCreator(Generic[T]):
附加资源到设备类实例
"""
if self.device_instance is not None:
for c in self.children:
if c.res_content.type != "device":
self.resource_tracker.add_resource(c.get_plr_nested_dict())
for c in self.children.values():
if c["type"] != "device":
self.resource_tracker.add_resource(c)

def create_instance(self, data: Dict[str, Any]) -> T:
"""
@@ -95,7 +94,7 @@ class PyLabRobotCreator(DeviceClassCreator[T]):
这个类提供了针对PyLabRobot设备类的实例创建方法,特别处理deserialize方法。
"""

def __init__(self, cls: Type[T], children: List[ResourceDictInstance], resource_tracker: DeviceNodeResourceTracker):
def __init__(self, cls: Type[T], children: Dict[str, Any], resource_tracker: DeviceNodeResourceTracker):
"""
初始化PyLabRobot设备类创建器

@@ -112,12 +111,12 @@ class PyLabRobotCreator(DeviceClassCreator[T]):
def attach_resource(self):
pass  # 只能增加实例化物料,原来默认物料仅为字典查询

# def _process_resource_mapping(self, resource, source_type):
#     if source_type == dict:
#         from pylabrobot.resources.resource import Resource
#
#         return nested_dict_to_list(resource), Resource
#     return resource, source_type
def _process_resource_mapping(self, resource, source_type):
if source_type == dict:
from pylabrobot.resources.resource import Resource

return nested_dict_to_list(resource), Resource
return resource, source_type

def _process_resource_references(
self, data: Any, to_dict=False, states=None, prefix_path="", name_to_uuid=None
@@ -143,21 +142,15 @@ class PyLabRobotCreator(DeviceClassCreator[T]):
if isinstance(data, dict):
if "_resource_child_name" in data:
child_name = data["_resource_child_name"]
resource: Optional[ResourceDictInstance] = None
for child in self.children:
if child.res_content.name == child_name:
resource = child
if resource is not None:
if child_name in self.children:
resource = self.children[child_name]
if "_resource_type" in data:
type_path = data["_resource_type"]
try:
# target_type = import_manager.get_class(type_path)
# contain_model = not issubclass(target_type, Deck)
# resource, target_type = self._process_resource_mapping(resource, target_type)
res_tree = ResourceTreeInstance(resource)
res_tree_set = ResourceTreeSet([res_tree])
resource_instance: Resource = res_tree_set.to_plr_resources()[0]
# resource_instance: Resource = resource_ulab_to_plr(resource, contain_model)  # 带state
target_type = import_manager.get_class(type_path)
contain_model = not issubclass(target_type, Deck)
resource, target_type = self._process_resource_mapping(resource, target_type)
resource_instance: Resource = resource_ulab_to_plr(resource, contain_model)  # 带state
states[prefix_path] = resource_instance.serialize_all_state()
# 使用 prefix_path 作为 key 存储资源状态
if to_dict:
@@ -209,12 +202,12 @@ class PyLabRobotCreator(DeviceClassCreator[T]):
stack = None

# 递归遍历 children 构建 name_to_uuid 映射
def collect_name_to_uuid(children_list: List[ResourceDictInstance], result: Dict[str, str]):
def collect_name_to_uuid(children_dict: Dict[str, Any], result: Dict[str, str]):
"""递归遍历嵌套的 children 字典,收集 name 到 uuid 的映射"""
for child in children_list:
if isinstance(child, ResourceDictInstance):
result[child.res_content.name] = child.res_content.uuid
collect_name_to_uuid(child.children, result)
for child in children_dict.values():
if isinstance(child, dict):
result[child["name"]] = child["uuid"]
collect_name_to_uuid(child["children"], result)

name_to_uuid = {}
collect_name_to_uuid(self.children, name_to_uuid)
@@ -320,7 +313,7 @@ class WorkstationNodeCreator(DeviceClassCreator[T]):
这个类提供了针对WorkstationNode设备类的实例创建方法,处理children参数。
"""

def __init__(self, cls: Type[T], children: List[ResourceDictInstance], resource_tracker: DeviceNodeResourceTracker):
def __init__(self, cls: Type[T], children: Dict[str, Any], resource_tracker: DeviceNodeResourceTracker):
"""
初始化WorkstationNode设备类创建器

@@ -343,9 +336,9 @@ class WorkstationNodeCreator(DeviceClassCreator[T]):
try:
# 创建实例,额外补充一个给protocol node的字段,后面考虑取消
data["children"] = self.children
for child in self.children:
if child.res_content.type != "device":
self.resource_tracker.add_resource(child.get_plr_nested_dict())
for material_id, child in self.children.items():
if child["type"] != "device":
self.resource_tracker.add_resource(self.children[material_id])
deck_dict = data.get("deck")
if deck_dict:
from pylabrobot.resources import Deck, Resource

@@ -9,7 +9,7 @@ if str(ROOT_DIR) not in sys.path:

import pytest

from unilabos.workflow.common import build_protocol_graph, draw_protocol_graph, draw_protocol_graph_with_ports
from scripts.workflow import build_protocol_graph, draw_protocol_graph, draw_protocol_graph_with_ports


ROOT_DIR = Path(__file__).resolve().parents[2]

@@ -5,7 +5,6 @@

import argparse
import importlib
import locale
import subprocess
import sys

@@ -23,33 +22,13 @@ class EnvironmentChecker:
"websockets": "websockets",
"msgcenterpy": "msgcenterpy",
"opentrons_shared_data": "opentrons_shared_data",
"typing_extensions": "typing_extensions",
}

# 特殊安装包(需要特殊处理的包)
self.special_packages = {"pylabrobot": "git+https://github.com/Xuwznln/pylabrobot.git"}

# 包版本要求(包名: 最低版本)
self.version_requirements = {
"msgcenterpy": "0.1.5",  # msgcenterpy 最低版本要求
}

self.missing_packages = []
self.failed_installs = []
self.packages_need_upgrade = []

# 检测系统语言
self.is_chinese = self._is_chinese_locale()

def _is_chinese_locale(self) -> bool:
"""检测系统是否为中文环境"""
try:
lang = locale.getdefaultlocale()[0]
if lang and ("zh" in lang.lower() or "chinese" in lang.lower()):
return True
except Exception:
pass
return False

def check_package_installed(self, package_name: str) -> bool:
"""检查包是否已安装"""
@@ -59,74 +38,31 @@ class EnvironmentChecker:
except ImportError:
return False

def get_package_version(self, package_name: str) -> str | None:
"""获取已安装包的版本"""
try:
module = importlib.import_module(package_name)
return getattr(module, "__version__", None)
except (ImportError, AttributeError):
return None

def compare_version(self, current: str, required: str) -> bool:
"""
比较版本号
Returns:
True: current >= required
False: current < required
"""
try:
current_parts = [int(x) for x in current.split(".")]
required_parts = [int(x) for x in required.split(".")]

# 补齐长度
max_len = max(len(current_parts), len(required_parts))
current_parts.extend([0] * (max_len - len(current_parts)))
required_parts.extend([0] * (max_len - len(required_parts)))

return current_parts >= required_parts
except Exception:
return True  # 如果无法比较,假设版本满足要求

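compare_version pads the shorter version with zeros and then relies on Python's element-wise list comparison, so "0.1.10" correctly counts as newer than "0.1.5". A standalone restatement of the method above (self and the try/except removed) with a few quick checks:

    def compare_version(current: str, required: str) -> bool:
        current_parts = [int(x) for x in current.split(".")]
        required_parts = [int(x) for x in required.split(".")]
        max_len = max(len(current_parts), len(required_parts))
        current_parts.extend([0] * (max_len - len(current_parts)))
        required_parts.extend([0] * (max_len - len(required_parts)))
        return current_parts >= required_parts

    assert compare_version("0.1.10", "0.1.5")      # 10 > 5 numerically, not lexically
    assert compare_version("0.1.5", "0.1.5")       # equal versions satisfy the minimum
    assert not compare_version("0.1.4", "0.1.5")
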
def install_package(self, package_name: str, pip_name: str, upgrade: bool = False) -> bool:
def install_package(self, package_name: str, pip_name: str) -> bool:
"""安装包"""
try:
action = "升级" if upgrade else "安装"
print_status(f"正在{action} {package_name} ({pip_name})...", "info")
print_status(f"正在安装 {package_name} ({pip_name})...", "info")

# 构建安装命令
cmd = [sys.executable, "-m", "pip", "install"]

# 如果是升级操作,添加 --upgrade 参数
if upgrade:
cmd.append("--upgrade")

cmd.append(pip_name)

# 如果是中文环境,使用清华镜像源
if self.is_chinese:
cmd.extend(["-i", "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple"])
cmd = [sys.executable, "-m", "pip", "install", pip_name]

# 执行安装
result = subprocess.run(cmd, capture_output=True, text=True, timeout=300)  # 5分钟超时

if result.returncode == 0:
print_status(f"✓ {package_name} {action}成功", "success")
print_status(f"✓ {package_name} 安装成功", "success")
return True
else:
print_status(f"× {package_name} {action}失败: {result.stderr}", "error")
print_status(f"× {package_name} 安装失败: {result.stderr}", "error")
return False

except subprocess.TimeoutExpired:
print_status(f"× {package_name} {action}超时", "error")
print_status(f"× {package_name} 安装超时", "error")
return False
except Exception as e:
print_status(f"× {package_name} {action}异常: {str(e)}", "error")
print_status(f"× {package_name} 安装异常: {str(e)}", "error")
return False

def upgrade_package(self, package_name: str, pip_name: str) -> bool:
"""升级包"""
return self.install_package(package_name, pip_name, upgrade=True)

def check_all_packages(self) -> bool:
"""检查所有必需的包"""
print_status("开始检查环境依赖...", "info")
@@ -135,116 +71,60 @@ class EnvironmentChecker:
for import_name, pip_name in self.required_packages.items():
if not self.check_package_installed(import_name):
self.missing_packages.append((import_name, pip_name))
else:
# 检查版本要求
if import_name in self.version_requirements:
current_version = self.get_package_version(import_name)
required_version = self.version_requirements[import_name]

if current_version:
if not self.compare_version(current_version, required_version):
print_status(
f"{import_name} 版本过低 (当前: {current_version}, 需要: >={required_version})",
"warning",
)
self.packages_need_upgrade.append((import_name, pip_name))

# 检查特殊包
for package_name, install_url in self.special_packages.items():
if not self.check_package_installed(package_name):
self.missing_packages.append((package_name, install_url))

all_ok = not self.missing_packages and not self.packages_need_upgrade

if all_ok:
if not self.missing_packages:
print_status("✓ 所有依赖包检查完成,环境正常", "success")
return True

if self.missing_packages:
print_status(f"发现 {len(self.missing_packages)} 个缺失的包", "warning")
if self.packages_need_upgrade:
print_status(f"发现 {len(self.packages_need_upgrade)} 个需要升级的包", "warning")

print_status(f"发现 {len(self.missing_packages)} 个缺失的包", "warning")
return False

def install_missing_packages(self, auto_install: bool = True) -> bool:
"""安装缺失的包"""
if not self.missing_packages and not self.packages_need_upgrade:
if not self.missing_packages:
return True

if not auto_install:
if self.missing_packages:
print_status("缺失以下包:", "warning")
for import_name, pip_name in self.missing_packages:
print_status(f" - {import_name} (pip install {pip_name})", "warning")
if self.packages_need_upgrade:
print_status("需要升级以下包:", "warning")
for import_name, pip_name in self.packages_need_upgrade:
print_status(f" - {import_name} (pip install --upgrade {pip_name})", "warning")
print_status("缺失以下包:", "warning")
for import_name, pip_name in self.missing_packages:
print_status(f" - {import_name} (pip install {pip_name})", "warning")
return False

# 安装缺失的包
if self.missing_packages:
print_status(f"开始自动安装 {len(self.missing_packages)} 个缺失的包...", "info")
print_status(f"开始自动安装 {len(self.missing_packages)} 个缺失的包...", "info")

success_count = 0
for import_name, pip_name in self.missing_packages:
if self.install_package(import_name, pip_name):
success_count += 1
else:
self.failed_installs.append((import_name, pip_name))

print_status(f"✓ 成功安装 {success_count}/{len(self.missing_packages)} 个包", "success")

# 升级需要更新的包
if self.packages_need_upgrade:
print_status(f"开始自动升级 {len(self.packages_need_upgrade)} 个包...", "info")

upgrade_success_count = 0
for import_name, pip_name in self.packages_need_upgrade:
if self.upgrade_package(import_name, pip_name):
upgrade_success_count += 1
else:
self.failed_installs.append((import_name, pip_name))

print_status(f"✓ 成功升级 {upgrade_success_count}/{len(self.packages_need_upgrade)} 个包", "success")
success_count = 0
for import_name, pip_name in self.missing_packages:
if self.install_package(import_name, pip_name):
success_count += 1
else:
self.failed_installs.append((import_name, pip_name))

if self.failed_installs:
print_status(f"有 {len(self.failed_installs)} 个包操作失败:", "error")
print_status(f"有 {len(self.failed_installs)} 个包安装失败:", "error")
for import_name, pip_name in self.failed_installs:
print_status(f" - {import_name} ({pip_name})", "error")
print_status(f" - {import_name} (pip install {pip_name})", "error")
return False

print_status(f"✓ 成功安装 {success_count} 个包", "success")
return True

def verify_installation(self) -> bool:
"""验证安装结果"""
if not self.missing_packages and not self.packages_need_upgrade:
if not self.missing_packages:
return True

print_status("验证安装结果...", "info")

failed_verification = []

# 验证新安装的包
for import_name, pip_name in self.missing_packages:
if not self.check_package_installed(import_name):
failed_verification.append((import_name, pip_name))

# 验证升级的包
for import_name, pip_name in self.packages_need_upgrade:
if not self.check_package_installed(import_name):
failed_verification.append((import_name, pip_name))
elif import_name in self.version_requirements:
current_version = self.get_package_version(import_name)
required_version = self.version_requirements[import_name]
if current_version and not self.compare_version(current_version, required_version):
failed_verification.append((import_name, pip_name))
print_status(
f" {import_name} 版本仍然过低 (当前: {current_version}, 需要: >={required_version})",
"error",
)

if failed_verification:
print_status(f"有 {len(failed_verification)} 个包验证失败:", "error")
for import_name, pip_name in failed_verification:

@@ -239,12 +239,8 @@ class ImportManager:
cls = get_class(class_path)
class_name = cls.__name__

result = {
"class_name": class_name,
"init_params": self._analyze_method_signature(cls.__init__)["args"],
"status_methods": {},
"action_methods": {},
}
result = {"class_name": class_name, "init_params": self._analyze_method_signature(cls.__init__)["args"],
"status_methods": {}, "action_methods": {}}
# 分析类的所有成员
for name, method in cls.__dict__.items():
if name.startswith("_"):
@@ -378,7 +374,6 @@ class ImportManager:
"name": method.__name__,
"args": args,
"return_type": self._get_type_string(signature.return_annotation),
"return_annotation": signature.return_annotation,  # 保留原始类型注解,用于TypedDict等特殊处理
"is_async": inspect.iscoroutinefunction(method),
}


@@ -124,14 +124,11 @@ class ColoredFormatter(logging.Formatter):
def _format_basic(self, record):
"""基本格式化,不包含颜色"""
datetime_str = datetime.fromtimestamp(record.created).strftime("%y-%m-%d [%H:%M:%S,%f")[:-3] + "]"
filename = record.filename.replace(".py", "").split("\\")[-1]  # 提取文件名(不含路径和扩展名)
if "/" in filename:
filename = filename.split("/")[-1]
filename = os.path.basename(record.filename).rsplit(".", 1)[0]  # 提取文件名(不含路径和扩展名)
module_path = f"{record.name}.{filename}"
func_line = f"{record.funcName}:{record.lineno}"
right_info = f" [{func_line}] [{module_path}]"

formatted_message = f"{datetime_str} [{record.levelname}] {record.getMessage()}{right_info}"
formatted_message = f"{datetime_str} [{record.levelname}] [{module_path}] [{func_line}]: {record.getMessage()}"

if record.exc_info:
exc_text = self.formatException(record.exc_info)
@@ -153,7 +150,7 @@ class ColoredFormatter(logging.Formatter):


# 配置日志处理器
def configure_logger(loglevel=None, working_dir=None):
def configure_logger(loglevel=None):
"""配置日志记录器

Args:
@@ -162,9 +159,8 @@ def configure_logger(loglevel=None, working_dir=None):
"""
# 获取根日志记录器
root_logger = logging.getLogger()
root_logger.setLevel(TRACE_LEVEL)

# 设置日志级别
numeric_level = logging.DEBUG
if loglevel is not None:
if isinstance(loglevel, str):
# 将字符串转换为logging级别
@@ -174,8 +170,12 @@ def configure_logger(loglevel=None, working_dir=None):
numeric_level = getattr(logging, loglevel.upper(), None)
if not isinstance(numeric_level, int):
print(f"警告: 无效的日志级别 '{loglevel}',使用默认级别 DEBUG")
numeric_level = logging.DEBUG
else:
numeric_level = loglevel
root_logger.setLevel(numeric_level)
else:
root_logger.setLevel(logging.DEBUG)  # 默认级别

# 移除已存在的处理器
for handler in root_logger.handlers[:]:
@@ -183,7 +183,7 @@ def configure_logger(loglevel=None, working_dir=None):

# 创建控制台处理器
console_handler = logging.StreamHandler()
console_handler.setLevel(numeric_level)  # 使用与根记录器相同的级别
console_handler.setLevel(root_logger.level)  # 使用与根记录器相同的级别

# 使用自定义的颜色格式化器
color_formatter = ColoredFormatter()
@@ -191,30 +191,9 @@ def configure_logger(loglevel=None, working_dir=None):

# 添加处理器到根日志记录器
root_logger.addHandler(console_handler)

# 如果指定了工作目录,添加文件处理器
if working_dir is not None:
logs_dir = os.path.join(working_dir, "logs")
os.makedirs(logs_dir, exist_ok=True)

# 生成日志文件名:日期 时间.log
log_filename = datetime.now().strftime("%Y-%m-%d %H-%M-%S") + ".log"
log_filepath = os.path.join(logs_dir, log_filename)

# 创建文件处理器
file_handler = logging.FileHandler(log_filepath, encoding="utf-8")
file_handler.setLevel(TRACE_LEVEL)

# 使用不带颜色的格式化器
file_formatter = ColoredFormatter(use_colors=False)
file_handler.setFormatter(file_formatter)

root_logger.addHandler(file_handler)

logging.getLogger("asyncio").setLevel(logging.INFO)
logging.getLogger("urllib3").setLevel(logging.INFO)


# 配置日志系统
configure_logger()


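The longer variant of configure_logger adds an optional working_dir: when it is given, a logs/ folder is created there and a timestamped FileHandler (color-free formatter, TRACE level) is attached next to the console handler. A minimal usage sketch (the import path is assumed from where logger is imported elsewhere in this diff; the directory is a placeholder):

    from unilabos.utils.log import configure_logger, logger  # assumed module path

    configure_logger(loglevel="INFO", working_dir="/tmp/unilab")  # also writes /tmp/unilab/logs/<timestamp>.log
    logger.info("logging to console and file")
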
@@ -1,484 +0,0 @@
import re
import uuid

import networkx as nx
from networkx.drawing.nx_agraph import to_agraph
import matplotlib.pyplot as plt
from typing import Dict, List, Any, Tuple, Optional

Json = Dict[str, Any]

# ---------------- Graph ----------------

class WorkflowGraph:
"""简单的有向图实现:使用 params 单层参数;inputs 内含连线;支持 node-link 导出"""

def __init__(self):
self.nodes: Dict[str, Dict[str, Any]] = {}
self.edges: List[Dict[str, Any]] = []

def add_node(self, node_id: str, **attrs):
self.nodes[node_id] = attrs

def add_edge(self, source: str, target: str, **attrs):
edge = {
"source": source,
"target": target,
"source_node_uuid": source,
"target_node_uuid": target,
"source_handle_io": attrs.pop("source_handle_io", "source"),
"target_handle_io": attrs.pop("target_handle_io", "target"),
**attrs
}
self.edges.append(edge)

def _materialize_wiring_into_inputs(self, obj: Any, inputs: Dict[str, Any],
variable_sources: Dict[str, Dict[str, Any]],
target_node_id: str, base_path: List[str]):
has_var = False

def walk(node: Any, path: List[str]):
nonlocal has_var
if isinstance(node, dict):
if "__var__" in node:
has_var = True
varname = node["__var__"]
placeholder = f"${{{varname}}}"
src = variable_sources.get(varname)
if src:
key = ".".join(path)  # e.g. "params.foo.bar.0"
inputs[key] = {"node": src["node_id"], "output": src.get("output_name", "result")}
self.add_edge(str(src["node_id"]), target_node_id,
source_handle_io=src.get("output_name", "result"),
target_handle_io=key)
return placeholder
return {k: walk(v, path + [k]) for k, v in node.items()}
if isinstance(node, list):
return [walk(v, path + [str(i)]) for i, v in enumerate(node)]
return node

replaced = walk(obj, base_path[:])
return replaced, has_var

def add_workflow_node(self,
node_id: int,
*,
device_key: Optional[str] = None,  # 实例名,如 "ser"
resource_name: Optional[str] = None,  # registry key(原 device_class)
module: Optional[str] = None,
template_name: Optional[str] = None,  # 动作/模板名(原 action_key)
params: Dict[str, Any],
variable_sources: Dict[str, Dict[str, Any]],
add_ready_if_no_vars: bool = True,
prev_node_id: Optional[int] = None,
**extra_attrs) -> None:
"""添加工作流节点:params 单层;自动变量连线与 ready 串联;支持附加属性"""
node_id_str = str(node_id)
inputs: Dict[str, Any] = {}

params, has_var = self._materialize_wiring_into_inputs(
params, inputs, variable_sources, node_id_str, base_path=["params"]
)

if add_ready_if_no_vars and not has_var:
last_id = str(prev_node_id) if prev_node_id is not None else "-1"
inputs["ready"] = {"node": int(last_id), "output": "ready"}
self.add_edge(last_id, node_id_str, source_handle_io="ready", target_handle_io="ready")

node_obj = {
"device_key": device_key,
"resource_name": resource_name,  # ✅ 新名字
"module": module,
"template_name": template_name,  # ✅ 新名字
"params": params,
"inputs": inputs,
}
node_obj.update(extra_attrs or {})
self.add_node(node_id_str, parameters=node_obj)

# 顺序工作流导出(连线在 inputs,不返回 edges)
def to_dict(self) -> List[Dict[str, Any]]:
result = []
for node_id, attrs in self.nodes.items():
node = {"id": node_id}
params = dict(attrs.get("parameters", {}) or {})
flat = {k: v for k, v in attrs.items() if k != "parameters"}
flat.update(params)
node.update(flat)
result.append(node)
return sorted(result, key=lambda n: int(n["id"]) if str(n["id"]).isdigit() else n["id"])

# node-link 导出(含 edges)
def to_node_link_dict(self) -> Dict[str, Any]:
nodes_list = []
for node_id, attrs in self.nodes.items():
node_attrs = attrs.copy()
params = node_attrs.pop("parameters", {}) or {}
node_attrs.update(params)
nodes_list.append({"id": node_id, **node_attrs})
return {"directed": True, "multigraph": False, "graph": {}, "nodes": nodes_list, "edges": self.edges, "links": self.edges}


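WorkflowGraph keeps nodes and edges in plain dicts and lists; to_dict() returns an ordered node list with wiring embedded in each node's inputs, while to_node_link_dict() also exposes explicit edges/links. A tiny standalone check of that export behaviour (illustrative node attributes only):

    g = WorkflowGraph()
    g.add_node("0", parameters={"template_name": "create_resource", "params": {"res_id": "PlateA"}})
    g.add_node("1", parameters={"template_name": "transfer_liquid", "params": {"volume": 50}})
    g.add_edge("0", "1", source_handle_io="ready", target_handle_io="ready")

    node_link = g.to_node_link_dict()
    assert node_link["directed"] is True
    assert len(node_link["nodes"]) == 2 and len(node_link["edges"]) == 1
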
def refactor_data(data: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Unified data refactoring: the template is chosen automatically from the operation type."""
    refactored_data = []

    # Operation mapping covering both biology and organic-chemistry operations
    OPERATION_MAPPING = {
        # Biology operations
        "transfer_liquid": "transfer_liquid",
        "transfer": "transfer",
        "incubation": "incubation",
        "move_labware": "move_labware",
        "oscillation": "oscillation",
        # Organic chemistry operations
        "HeatChillToTemp": "HeatChillProtocol",
        "StopHeatChill": "HeatChillStopProtocol",
        "StartHeatChill": "HeatChillStartProtocol",
        "HeatChill": "HeatChillProtocol",
        "Dissolve": "DissolveProtocol",
        "Transfer": "TransferProtocol",
        "Evaporate": "EvaporateProtocol",
        "Recrystallize": "RecrystallizeProtocol",
        "Filter": "FilterProtocol",
        "Dry": "DryProtocol",
        "Add": "AddProtocol",
    }

    UNSUPPORTED_OPERATIONS = ["Purge", "Wait", "Stir", "ResetHandling"]

    for step in data:
        operation = step.get("action")
        if not operation or operation in UNSUPPORTED_OPERATIONS:
            continue

        # Expand Repeat operations
        if operation == "Repeat":
            times = step.get("times", step.get("parameters", {}).get("times", 1))
            sub_steps = step.get("steps", step.get("parameters", {}).get("steps", []))
            for i in range(int(times)):
                sub_data = refactor_data(sub_steps)
                refactored_data.extend(sub_data)
            continue

        # Resolve the template name
        template = OPERATION_MAPPING.get(operation)
        if not template:
            # Infer the template type automatically
            if operation.lower() in ["transfer", "incubation", "move_labware", "oscillation"]:
                template = f"biomek-{operation}"
            else:
                template = f"{operation}Protocol"

        # Build the step data
        step_data = {
            "template": template,
            "description": step.get("description", step.get("purpose", f"{operation} operation")),
            "lab_node_type": "Device",
            "parameters": step.get("parameters", step.get("action_args", {})),
        }
        refactored_data.append(step_data)

    return refactored_data
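As a concrete illustration of the mapping, a single XDL-style step (values made up) is rewritten as follows:

    refactor_data([{"action": "HeatChillToTemp", "parameters": {"vessel": "reactor", "temp": 65}}])
    # -> [{"template": "HeatChillProtocol",
    #      "description": "HeatChillToTemp operation",
    #      "lab_node_type": "Device",
    #      "parameters": {"vessel": "reactor", "temp": 65}}]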
def build_protocol_graph(
    labware_info: Dict[str, Dict[str, Any]], protocol_steps: List[Dict[str, Any]], workstation_name: str
) -> WorkflowGraph:
    """Unified protocol-graph builder; the construction logic is chosen from the device type."""
    G = WorkflowGraph()
    resource_last_writer = {}

    protocol_steps = refactor_data(protocol_steps)
    # Protocol-graph construction for organic chemistry & liquid-handling workstations
    WORKSTATION_ID = workstation_name

    # Create a resource node for every labware item
    for labware_id, item in labware_info.items():
        # item_id = item.get("id") or item.get("name", f"item_{uuid.uuid4()}")
        node_id = str(uuid.uuid4())

        # Determine the node type
        if "Rack" in str(labware_id) or "Tip" in str(labware_id):
            lab_node_type = "Labware"
            description = f"Prepare Labware: {labware_id}"
            liquid_type = []
            liquid_volume = []
        elif item.get("type") == "hardware" or "reactor" in str(labware_id).lower():
            if "reactor" not in str(labware_id).lower():
                continue
            lab_node_type = "Sample"
            description = f"Prepare Reactor: {labware_id}"
            liquid_type = []
            liquid_volume = []
        else:
            lab_node_type = "Reagent"
            description = f"Add Reagent to Flask: {labware_id}"
            liquid_type = [labware_id]
            liquid_volume = [1e5]

        G.add_node(
            node_id,
            template_name="create_resource",
            resource_name="host_node",
            description=description,
            lab_node_type=lab_node_type,
            params={
                "res_id": labware_id,
                "device_id": WORKSTATION_ID,
                "class_name": "container",
                "parent": WORKSTATION_ID,
                "bind_locations": {"x": 0.0, "y": 0.0, "z": 0.0},
                "liquid_input_slot": [-1],
                "liquid_type": liquid_type,
                "liquid_volume": liquid_volume,
                "slot_on_deck": "",
            },
            role=item.get("role", ""),
        )
        resource_last_writer[labware_id] = f"{node_id}:labware"

    last_control_node_id = None

    # Process the protocol steps
    for step in protocol_steps:
        node_id = str(uuid.uuid4())
        G.add_node(node_id, **step)

        # Control flow
        if last_control_node_id is not None:
            G.add_edge(last_control_node_id, node_id, source_port="ready", target_port="ready")
        last_control_node_id = node_id

        # Material flow
        params = step.get("parameters", {})
        input_resources_possible_names = [
            "vessel",
            "to_vessel",
            "from_vessel",
            "reagent",
            "solvent",
            "compound",
            "sources",
            "targets",
        ]

        for target_port in input_resources_possible_names:
            resource_name = params.get(target_port)
            if resource_name and resource_name in resource_last_writer:
                source_node, source_port = resource_last_writer[resource_name].split(":")
                G.add_edge(source_node, node_id, source_port=source_port, target_port=target_port)

        output_resources = {
            "vessel_out": params.get("vessel"),
            "from_vessel_out": params.get("from_vessel"),
            "to_vessel_out": params.get("to_vessel"),
            "filtrate_out": params.get("filtrate_vessel"),
            "reagent": params.get("reagent"),
            "solvent": params.get("solvent"),
            "compound": params.get("compound"),
            "sources_out": params.get("sources"),
            "targets_out": params.get("targets"),
        }

        for source_port, resource_name in output_resources.items():
            if resource_name:
                resource_last_writer[resource_name] = f"{node_id}:{source_port}"

    return G
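A rough usage sketch; the labware ids and step contents are made up, and "PRCXi" is the workstation name used elsewhere in this changeset:

    labware = {
        "reactor_A": {"type": "hardware"},
        "Ethanol_Flask": {"type": "reagent", "role": "solvent"},
    }
    steps = [{"action": "Add", "parameters": {"vessel": "reactor_A", "reagent": "Ethanol_Flask", "volume": 10}}]
    graph = build_protocol_graph(labware_info=labware, protocol_steps=steps, workstation_name="PRCXi")
    # Two resource nodes (a Sample for the reactor, a Reagent for the flask) feed the AddProtocol
    # step node through its "vessel" and "reagent" target ports.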
def draw_protocol_graph(protocol_graph: WorkflowGraph, output_path: str):
    """
    (Helper) Draw the protocol workflow graph with networkx and matplotlib for visualization.
    """
    if not protocol_graph:
        print("Cannot draw graph: Graph object is empty.")
        return

    G = nx.DiGraph()

    for node_id, attrs in protocol_graph.nodes.items():
        label = attrs.get("description", attrs.get("template", node_id[:8]))
        G.add_node(node_id, label=label, **attrs)

    for edge in protocol_graph.edges:
        G.add_edge(edge["source"], edge["target"])

    plt.figure(figsize=(20, 15))
    try:
        pos = nx.nx_agraph.graphviz_layout(G, prog="dot")
    except Exception:
        pos = nx.shell_layout(G)  # Fallback layout

    node_labels = {node: data["label"] for node, data in G.nodes(data=True)}
    nx.draw(
        G,
        pos,
        with_labels=False,
        node_size=2500,
        node_color="skyblue",
        node_shape="o",
        edge_color="gray",
        width=1.5,
        arrowsize=15,
    )
    nx.draw_networkx_labels(G, pos, labels=node_labels, font_size=8, font_weight="bold")

    plt.title("Chemical Protocol Workflow Graph", size=15)
    plt.savefig(output_path, dpi=300, bbox_inches="tight")
    plt.close()
    print(f" - Visualization saved to '{output_path}'")
COMPASS = {"n", "e", "s", "w", "ne", "nw", "se", "sw", "c"}

def _is_compass(port: str) -> bool:
    return isinstance(port, str) and port.lower() in COMPASS

def draw_protocol_graph_with_ports(protocol_graph, output_path: str, rankdir: str = "LR"):
    """
    Draw the protocol workflow graph using Graphviz port syntax.
    - If an edge's source_port/target_port is a compass point (n/e/s/w/...), the compass is used directly.
    - Otherwise the node is given a record shape with named ports <portname>.
    The result is rendered by PyGraphviz to output_path (the suffix decides the format, e.g. .png/.svg/.pdf).
    """
    if not protocol_graph:
        print("Cannot draw graph: Graph object is empty.")
        return

    # 1) Build a directed graph with networkx first, keeping the port attributes
    G = nx.DiGraph()
    for node_id, attrs in protocol_graph.nodes.items():
        label = attrs.get("description", attrs.get("template", node_id[:8]))
        # Keep a clean "core label" for the middle slot of the record
        G.add_node(node_id, _core_label=str(label), **{k: v for k, v in attrs.items() if k not in ("label",)})

    edges_data = []
    in_ports_by_node = {}   # collect named input ports
    out_ports_by_node = {}  # collect named output ports

    for edge in protocol_graph.edges:
        u = edge["source"]
        v = edge["target"]
        sp = edge.get("source_port")
        tp = edge.get("target_port")

        # Record the edge in the graph (keep the original port info)
        G.add_edge(u, v, source_port=sp, target_port=tp)
        edges_data.append((u, v, sp, tp))

        # Non-compass ports are grouped as named ports; records are built for those nodes below
        if sp and not _is_compass(sp):
            out_ports_by_node.setdefault(u, set()).add(str(sp))
        if tp and not _is_compass(tp):
            in_ports_by_node.setdefault(v, set()).add(str(tp))

    # 2) Convert to an AGraph and render with Graphviz
    A = to_agraph(G)
    A.graph_attr.update(rankdir=rankdir, splines="true", concentrate="false", fontsize="10")
    A.node_attr.update(shape="box", style="rounded,filled", fillcolor="lightyellow", color="#999999", fontname="Helvetica")
    A.edge_attr.update(arrowsize="0.8", color="#666666")

    # 3) Give nodes that need named ports a record shape and label
    #    left column = input ports; middle = core label; right column = output ports
    for n in A.nodes():
        node = A.get_node(n)
        core = G.nodes[n].get("_core_label", n)

        in_ports = sorted(in_ports_by_node.get(n, []))
        out_ports = sorted(out_ports_by_node.get(n, []))

        # Use a record if the node has named ports; otherwise keep the plain box
        if in_ports or out_ports:
            def port_fields(ports):
                if not ports:
                    return " "  # an empty slot must be kept as a placeholder
                # one small cell per port: <p> name
                return "|".join(f"<{re.sub(r'[^A-Za-z0-9_:.|-]', '_', p)}> {p}" for p in ports)

            left = port_fields(in_ports)
            right = port_fields(out_ports)

            # three columns: left (in) | middle (node name) | right (out)
            record_label = f"{{ {left} | {core} | {right} }}"
            node.attr.update(shape="record", label=record_label)
        else:
            # No named ports: a plain box showing the core label
            node.attr.update(label=str(core))

    # 4) Set headport / tailport on the edges
    #    - compass port: use it directly (e.g., headport="e")
    #    - named port: use the <port> name defined in the record (same name)
    for (u, v, sp, tp) in edges_data:
        e = A.get_edge(u, v)

        # Graphviz attributes: tail is the source, head is the target
        if sp:
            if _is_compass(sp):
                e.attr["tailport"] = sp.lower()
            else:
                # Must match the <port> name in the record label; special characters were sanitized there
                e.attr["tailport"] = re.sub(r'[^A-Za-z0-9_:.|-]', '_', str(sp))

        if tp:
            if _is_compass(tp):
                e.attr["headport"] = tp.lower()
            else:
                e.attr["headport"] = re.sub(r'[^A-Za-z0-9_:.|-]', '_', str(tp))

        # Optional: tweak constraint/splines etc. to make edges hug the node borders
        # e.attr["arrowhead"] = "vee"

    # 5) Output
    A.draw(output_path, prog="dot")
    print(f" - Port-aware workflow rendered to '{output_path}'")
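To make the record syntax concrete: a step node labelled "Add Reagent" that receives target_port="vessel" and writes "vessel_out" would get a label like the one below, and its edges attach via headport "vessel" and tailport "vessel_out" (an illustrative sketch, not output from a real run):

    { <vessel> vessel | Add Reagent | <vessel_out> vessel_out }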
# ---------------- Registry Adapter ----------------

class RegistryAdapter:
    """Resolve the registry resource_name (formerly device_class) from the class name in module (right of the colon), and extract parameter order."""
    def __init__(self, device_registry: Dict[str, Any]):
        self.device_registry = device_registry or {}
        self.module_class_to_resource = self._build_module_class_index()

    def _build_module_class_index(self) -> Dict[str, str]:
        idx = {}
        for resource_name, info in self.device_registry.items():
            module = info.get("module")
            if isinstance(module, str) and ":" in module:
                cls = module.split(":")[-1]
                idx[cls] = resource_name
                idx[cls.lower()] = resource_name
        return idx

    def resolve_resource_by_classname(self, class_name: str) -> Optional[str]:
        if not class_name:
            return None
        return (self.module_class_to_resource.get(class_name)
                or self.module_class_to_resource.get(class_name.lower()))

    def get_device_module(self, resource_name: Optional[str]) -> Optional[str]:
        if not resource_name:
            return None
        return self.device_registry.get(resource_name, {}).get("module")

    def get_actions(self, resource_name: Optional[str]) -> Dict[str, Any]:
        if not resource_name:
            return {}
        return (self.device_registry.get(resource_name, {})
                .get("class", {})
                .get("action_value_mappings", {})) or {}

    def get_action_schema(self, resource_name: Optional[str], template_name: str) -> Optional[Json]:
        return (self.get_actions(resource_name).get(template_name) or {}).get("schema")

    def get_action_goal_default(self, resource_name: Optional[str], template_name: str) -> Json:
        return (self.get_actions(resource_name).get(template_name) or {}).get("goal_default", {}) or {}

    def get_action_input_keys(self, resource_name: Optional[str], template_name: str) -> List[str]:
        schema = self.get_action_schema(resource_name, template_name) or {}
        goal = (schema.get("properties") or {}).get("goal") or {}
        props = goal.get("properties") or {}
        required = goal.get("required") or []
        return list(dict.fromkeys(required + list(props.keys())))
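A minimal sketch of the registry shape these accessors expect; the "mock_pump" entry and its fields are illustrative, not an actual registry entry:

    registry = {
        "mock_pump": {
            "module": "unilabos.devices.pump:MockPump",
            "class": {
                "action_value_mappings": {
                    "push": {
                        "goal_default": {"speed": 1.0},
                        "schema": {"properties": {"goal": {
                            "properties": {"volume": {}, "speed": {}},
                            "required": ["volume"],
                        }}},
                    }
                }
            },
        }
    }
    adapter = RegistryAdapter(registry)
    adapter.resolve_resource_by_classname("MockPump")   # -> "mock_pump"
    adapter.get_action_input_keys("mock_pump", "push")  # -> ["volume", "speed"]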
@@ -1,24 +0,0 @@
import json
from os import PathLike

from unilabos.workflow.common import build_protocol_graph


def from_labwares_and_steps(data_path: PathLike):
    with data_path.open("r", encoding="utf-8") as fp:
        d = json.load(fp)

    if "workflow" in d and "reagent" in d:
        protocol_steps = d["workflow"]
        labware_info = d["reagent"]
    elif "steps_info" in d and "labware_info" in d:
        protocol_steps = _normalize_steps(d["steps_info"])
        labware_info = _normalize_labware(d["labware_info"])
    else:
        raise ValueError("Unsupported protocol format")

    graph = build_protocol_graph(
        labware_info=labware_info,
        protocol_steps=protocol_steps,
        workstation_name="PRCXi",
    )
@@ -1,241 +0,0 @@
import ast
import json
from typing import Dict, List, Any, Tuple, Optional

from .common import WorkflowGraph, RegistryAdapter

Json = Dict[str, Any]

# ---------------- Converter ----------------

class DeviceMethodConverter:
    """
    - Unified fields: resource_name (formerly device_class), template_name (formerly action_key)
    - Flat params; inputs use the 'params.' prefix
    - SimpleGraph.add_workflow_node takes care of variable wiring and edges
    """
    def __init__(self, device_registry: Optional[Dict[str, Any]] = None):
        self.graph = WorkflowGraph()
        self.variable_sources: Dict[str, Dict[str, Any]] = {}  # var -> {node_id, output_name}
        self.instance_to_resource: Dict[str, Optional[str]] = {}  # instance name -> resource_name
        self.node_id_counter: int = 0
        self.registry = RegistryAdapter(device_registry or {})

    # ---- helpers ----
    def _new_node_id(self) -> int:
        nid = self.node_id_counter
        self.node_id_counter += 1
        return nid

    def _assign_targets(self, targets) -> List[str]:
        names: List[str] = []
        import ast
        if isinstance(targets, ast.Tuple):
            for elt in targets.elts:
                if isinstance(elt, ast.Name):
                    names.append(elt.id)
        elif isinstance(targets, ast.Name):
            names.append(targets.id)
        return names

    def _extract_device_instantiation(self, node) -> Optional[Tuple[str, str]]:
        import ast
        if not isinstance(node.value, ast.Call):
            return None
        callee = node.value.func
        if isinstance(callee, ast.Name):
            class_name = callee.id
        elif isinstance(callee, ast.Attribute) and isinstance(callee.value, ast.Name):
            class_name = callee.attr
        else:
            return None
        if isinstance(node.targets[0], ast.Name):
            instance = node.targets[0].id
            return instance, class_name
        return None

    def _extract_call(self, call) -> Tuple[str, str, Dict[str, Any], str]:
        import ast
        owner_name, method_name, call_kind = "", "", "func"
        if isinstance(call.func, ast.Attribute):
            method_name = call.func.attr
            if isinstance(call.func.value, ast.Name):
                owner_name = call.func.value.id
                call_kind = "instance" if owner_name in self.instance_to_resource else "class_or_module"
            elif isinstance(call.func.value, ast.Attribute) and isinstance(call.func.value.value, ast.Name):
                owner_name = call.func.value.attr
                call_kind = "class_or_module"
        elif isinstance(call.func, ast.Name):
            method_name = call.func.id
            call_kind = "func"

        def pack(node):
            if isinstance(node, ast.Name):
                return {"type": "variable", "value": node.id}
            if isinstance(node, ast.Constant):
                return {"type": "constant", "value": node.value}
            if isinstance(node, ast.Dict):
                return {"type": "dict", "value": self._parse_dict(node)}
            if isinstance(node, ast.List):
                return {"type": "list", "value": self._parse_list(node)}
            return {"type": "raw", "value": ast.unparse(node) if hasattr(ast, "unparse") else str(node)}

        args: Dict[str, Any] = {}
        pos: List[Any] = []
        for a in call.args:
            pos.append(pack(a))
        for kw in call.keywords:
            args[kw.arg] = pack(kw.value)
        if pos:
            args["_positional"] = pos
        return owner_name, method_name, args, call_kind

    def _parse_dict(self, node) -> Dict[str, Any]:
        import ast
        out: Dict[str, Any] = {}
        for k, v in zip(node.keys, node.values):
            if isinstance(k, ast.Constant):
                key = str(k.value)
                if isinstance(v, ast.Name):
                    out[key] = f"var:{v.id}"
                elif isinstance(v, ast.Constant):
                    out[key] = v.value
                elif isinstance(v, ast.Dict):
                    out[key] = self._parse_dict(v)
                elif isinstance(v, ast.List):
                    out[key] = self._parse_list(v)
        return out

    def _parse_list(self, node) -> List[Any]:
        import ast
        out: List[Any] = []
        for elt in node.elts:
            if isinstance(elt, ast.Name):
                out.append(f"var:{elt.id}")
            elif isinstance(elt, ast.Constant):
                out.append(elt.value)
            elif isinstance(elt, ast.Dict):
                out.append(self._parse_dict(elt))
            elif isinstance(elt, ast.List):
                out.append(self._parse_list(elt))
        return out

    def _normalize_var_tokens(self, x: Any) -> Any:
        if isinstance(x, str) and x.startswith("var:"):
            return {"__var__": x[4:]}
        if isinstance(x, list):
            return [self._normalize_var_tokens(i) for i in x]
        if isinstance(x, dict):
            return {k: self._normalize_var_tokens(v) for k, v in x.items()}
        return x

    def _make_params_payload(self, resource_name: Optional[str], template_name: str, call_args: Dict[str, Any]) -> Dict[str, Any]:
        input_keys = self.registry.get_action_input_keys(resource_name, template_name) if resource_name else []
        defaults = self.registry.get_action_goal_default(resource_name, template_name) if resource_name else {}
        params: Dict[str, Any] = dict(defaults)

        def unpack(p):
            t, v = p.get("type"), p.get("value")
            if t == "variable":
                return {"__var__": v}
            if t == "dict":
                return self._normalize_var_tokens(v)
            if t == "list":
                return self._normalize_var_tokens(v)
            return v

        for k, p in call_args.items():
            if k == "_positional":
                continue
            params[k] = unpack(p)

        pos = call_args.get("_positional", [])
        if pos:
            if input_keys:
                for i, p in enumerate(pos):
                    if i >= len(input_keys):
                        break
                    name = input_keys[i]
                    if name in params:
                        continue
                    params[name] = unpack(p)
            else:
                for i, p in enumerate(pos):
                    params[f"arg_{i}"] = unpack(p)
        return params

    # ---- handlers ----
    def _on_assign(self, stmt):
        import ast
        inst = self._extract_device_instantiation(stmt)
        if inst:
            instance, code_class = inst
            resource_name = self.registry.resolve_resource_by_classname(code_class)
            self.instance_to_resource[instance] = resource_name
            return

        if isinstance(stmt.value, ast.Call):
            owner, method, call_args, kind = self._extract_call(stmt.value)
            if kind == "instance":
                device_key = owner
                resource_name = self.instance_to_resource.get(owner)
            else:
                device_key = owner
                resource_name = self.registry.resolve_resource_by_classname(owner)

            module = self.registry.get_device_module(resource_name)
            params = self._make_params_payload(resource_name, method, call_args)

            nid = self._new_node_id()
            self.graph.add_workflow_node(
                nid,
                device_key=device_key,
                resource_name=resource_name,  # ✅
                module=module,
                template_name=method,  # ✅
                params=params,
                variable_sources=self.variable_sources,
                add_ready_if_no_vars=True,
                prev_node_id=(nid - 1) if nid > 0 else None,
            )

            out_vars = self._assign_targets(stmt.targets[0])
            for var in out_vars:
                self.variable_sources[var] = {"node_id": nid, "output_name": "result"}

    def _on_expr(self, stmt):
        import ast
        if not isinstance(stmt.value, ast.Call):
            return
        owner, method, call_args, kind = self._extract_call(stmt.value)
        if kind == "instance":
            device_key = owner
            resource_name = self.instance_to_resource.get(owner)
        else:
            device_key = owner
            resource_name = self.registry.resolve_resource_by_classname(owner)

        module = self.registry.get_device_module(resource_name)
        params = self._make_params_payload(resource_name, method, call_args)

        nid = self._new_node_id()
        self.graph.add_workflow_node(
            nid,
            device_key=device_key,
            resource_name=resource_name,  # ✅
            module=module,
            template_name=method,  # ✅
            params=params,
            variable_sources=self.variable_sources,
            add_ready_if_no_vars=True,
            prev_node_id=(nid - 1) if nid > 0 else None,
        )

    def convert(self, python_code: str):
        tree = ast.parse(python_code)
        for stmt in tree.body:
            if isinstance(stmt, ast.Assign):
                self._on_assign(stmt)
            elif isinstance(stmt, ast.Expr):
                self._on_expr(stmt)
        return self
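A rough usage sketch of the converter, reusing the hypothetical registry from the RegistryAdapter example above; the MockPump class is made up, and method calls written as bare expression statements become workflow nodes chained by "ready" edges:

    code = "ser = MockPump()\nser.push(volume=5.0, speed=2.0)\nser.pull(volume=5.0)"
    converter = DeviceMethodConverter(device_registry=registry).convert(code)
    workflow = converter.graph.to_dict()
    # Two nodes ("0" and "1") with template_name "push" / "pull" and device_key "ser";
    # node "1" waits on node "0" through its ready input.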
@@ -1,131 +0,0 @@
from typing import List, Any, Dict
import xml.etree.ElementTree as ET


def convert_to_type(val: str) -> Any:
    """Convert a string value to an appropriate data type."""
    if val == "True":
        return True
    if val == "False":
        return False
    if val == "?":
        return None
    if val.endswith(" g"):
        return float(val.split(" ")[0])
    if val.endswith("mg"):
        return float(val.split("mg")[0])
    elif val.endswith("mmol"):
        return float(val.split("mmol")[0]) / 1000
    elif val.endswith("mol"):
        return float(val.split("mol")[0])
    elif val.endswith("ml"):
        return float(val.split("ml")[0])
    elif val.endswith("RPM"):
        return float(val.split("RPM")[0])
    elif val.endswith(" °C"):
        return float(val.split(" ")[0])
    elif val.endswith(" %"):
        return float(val.split(" ")[0])
    return val


def flatten_xdl_procedure(procedure_elem: ET.Element) -> List[ET.Element]:
    """Flatten the nested XDL procedure structure."""
    flattened_operations = []
    TEMP_UNSUPPORTED_PROTOCOL = ["Purge", "Wait", "Stir", "ResetHandling"]

    def extract_operations(element: ET.Element):
        if element.tag not in ["Prep", "Reaction", "Workup", "Purification", "Procedure"]:
            if element.tag not in TEMP_UNSUPPORTED_PROTOCOL:
                flattened_operations.append(element)

        for child in element:
            extract_operations(child)

    for child in procedure_elem:
        extract_operations(child)

    return flattened_operations


def parse_xdl_content(xdl_content: str) -> tuple:
    """Parse XDL content."""
    try:
        xdl_content_cleaned = "".join(c for c in xdl_content if c.isprintable())
        root = ET.fromstring(xdl_content_cleaned)

        synthesis_elem = root.find("Synthesis")
        if synthesis_elem is None:
            return None, None, None

        # Parse hardware components
        hardware_elem = synthesis_elem.find("Hardware")
        hardware = []
        if hardware_elem is not None:
            hardware = [{"id": c.get("id"), "type": c.get("type")} for c in hardware_elem.findall("Component")]

        # Parse reagents
        reagents_elem = synthesis_elem.find("Reagents")
        reagents = []
        if reagents_elem is not None:
            reagents = [{"name": r.get("name"), "role": r.get("role", "")} for r in reagents_elem.findall("Reagent")]

        # Parse the procedure
        procedure_elem = synthesis_elem.find("Procedure")
        if procedure_elem is None:
            return None, None, None

        flattened_operations = flatten_xdl_procedure(procedure_elem)
        return hardware, reagents, flattened_operations

    except ET.ParseError as e:
        raise ValueError(f"Invalid XDL format: {e}")


def convert_xdl_to_dict(xdl_content: str) -> Dict[str, Any]:
    """
    Convert XDL XML into the standard dict format.

    Args:
        xdl_content: XDL XML content

    Returns:
        Conversion result containing step and labware information
    """
    try:
        hardware, reagents, flattened_operations = parse_xdl_content(xdl_content)
        if hardware is None:
            return {"error": "Failed to parse XDL content", "success": False}

        # Convert XDL elements into dict format
        steps_data = []
        for elem in flattened_operations:
            # Convert parameter types
            parameters = {}
            for key, val in elem.attrib.items():
                converted_val = convert_to_type(val)
                if converted_val is not None:
                    parameters[key] = converted_val

            step_dict = {
                "operation": elem.tag,
                "parameters": parameters,
                "description": elem.get("purpose", f"Operation: {elem.tag}"),
            }
            steps_data.append(step_dict)

        # Merge hardware and reagents into the unified labware_info format
        labware_data = []
        labware_data.extend({"id": hw["id"], "type": "hardware", **hw} for hw in hardware)
        labware_data.extend({"name": reagent["name"], "type": "reagent", **reagent} for reagent in reagents)

        return {
            "success": True,
            "steps": steps_data,
            "labware": labware_data,
            "message": f"Successfully converted XDL to dict format. Found {len(steps_data)} steps and {len(labware_data)} labware items.",
        }

    except Exception as e:
        error_msg = f"XDL conversion failed: {str(e)}"
        return {"error": error_msg, "success": False}
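For reference, a minimal XDL document (made up for illustration) converts as follows:

    xdl = (
        "<XDL><Synthesis>"
        "<Hardware><Component id='reactor' type='reactor'/></Hardware>"
        "<Reagents><Reagent name='ethanol' role='solvent'/></Reagents>"
        "<Procedure><Add vessel='reactor' reagent='ethanol' volume='10 ml'/></Procedure>"
        "</Synthesis></XDL>"
    )
    result = convert_xdl_to_dict(xdl)
    # result["steps"][0] == {"operation": "Add",
    #                        "parameters": {"vessel": "reactor", "reagent": "ethanol", "volume": 10.0},
    #                        "description": "Operation: Add"}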
@@ -2,7 +2,7 @@
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
  <name>unilabos_msgs</name>
  <version>0.10.12</version>
  <version>0.10.11</version>
  <description>ROS2 Messages package for unilabos devices</description>
  <maintainer email="changjh@pku.edu.cn">Junhan Chang</maintainer>
  <maintainer email="18435084+Xuwznln@users.noreply.github.com">Xuwznln</maintainer>