25 Commits

Author SHA1 Message Date
xiaoyu10031
7f4a2d72f6 add init.py file to cameraSII driver 2025-12-16 14:29:52 +08:00
xiaoyu10031
e1fda6f4ed add camera driver 2025-12-16 00:04:52 +08:00
Xuwznln
152d3a7563 Update docs 2025-12-14 13:12:19 +08:00
Xuwznln
ef14737839 update "laiyu" missing init file. 2025-12-14 13:08:27 +08:00
Xuwznln
5d5569121c fix "laiyu" missing init file. 2025-12-14 12:55:25 +08:00
Xuwznln
d23e85ade4 fix "🐛 fix" 2025-12-14 01:17:24 +08:00
Haohui
02afafd423 🐛 fix: config file is overwritten by default args even if not set. 2025-12-12 23:55:38 +08:00
Xianwei Qi
6ac510dcd2 mix
Modified mix; fixed an error reported in the simulation workflow
2025-12-11 23:26:11 +08:00
Xuwznln
ed56c1eba2 reduce logs 2025-12-08 19:23:53 +08:00
Xuwznln
16ee3de086 Add workflow upload func. 2025-12-08 19:12:05 +08:00
Junhan Chang
ced961050d add unilabos/workflow and entrypoint 2025-12-07 17:50:27 +08:00
Xuwznln
11b2c99836 update version to 0.10.12
(cherry picked from commit b1cdef9185)
2025-12-04 18:47:44 +08:00
Xuwznln
04024bc8a3 fix ros2 future 2025-12-04 18:44:50 +08:00
Xuwznln
154048107d print all logs to file
fix resource dict dump error
2025-12-04 16:04:56 +08:00
Xuwznln
0b896870ba signal when host node is ready 2025-12-02 12:00:41 +08:00
Xuwznln
ee609e4aa2 Fix startup with remote resource error 2025-12-02 11:49:59 +08:00
Xuwznln
5551fbf360 Resource dict fully change to "pose" key 2025-12-02 03:45:16 +08:00
Xuwznln
e13b250632 Update oss link 2025-12-01 12:23:07 +08:00
Xuwznln
b8278c5026 Reduce pylabrobot conversion warning & force enable log dump. 2025-11-28 22:41:50 +08:00
ZiWei
53e767a054 Update logo image 2025-11-28 11:35:05 +08:00
Xuwznln
cf7032fa81 Auto dump logs, fix workstation input schema 2025-11-27 14:24:50 +08:00
Xuwznln
97681ba433 Add get_regular_container func 2025-11-27 13:47:47 +08:00
Xuwznln
3fa81ab4f6 Add get_regular_container func
(cherry picked from commit ed8ee29732)
2025-11-27 13:47:46 +08:00
Harry Liu
9f4a69ddf5 Transfer_liquid (#176)
* change 9320 desk row number to 4

* Updated 9320 host address

* Updated 9320 host address

* Add **kwargs in classes: PRCXI9300Deck and PRCXI9300Container

* Removed all sample_id in prcxi_9320.json to avoid KeyError

* 9320 machine testing settings

* Typo

* Typo in base_device_node.py

* Enhance liquid handling functionality by adding support for multiple transfer modes (one-to-many, one-to-one, many-to-one) and improving parameter validation. Default channel usage is set when not specified. Adjusted mixing logic to ensure it only occurs when valid conditions are met. Updated documentation for clarity.
2025-11-26 19:30:42 +08:00
Xuwznln
05ae4e72df Add backend api and update doc 2025-11-26 19:03:31 +08:00
82 changed files with 4956 additions and 1013 deletions

View File

@@ -1,6 +1,6 @@
package:
name: unilabos
version: 0.10.11
version: 0.10.12
source:
path: ../unilabos

View File

@@ -39,7 +39,9 @@ Uni-Lab-OS recommends using `mamba` for environment management. Choose the appro
```bash
# Create new environment
mamba create -n unilab uni-lab::unilabos -c robostack-staging -c conda-forge
mamba create -n unilab python=3.11.11
mamba activate unilab
mamba install -n unilab uni-lab::unilabos -c robostack-staging -c conda-forge
```
## Install Dev Uni-Lab-OS

View File

@@ -41,7 +41,9 @@ Uni-Lab-OS 建议使用 `mamba` 管理环境。根据您的操作系统选择适
```bash
# 创建新环境
mamba create -n unilab uni-lab::unilabos -c robostack-staging -c conda-forge
mamba create -n unilab python=3.11.11
mamba activate unilab
mamba install -n unilab uni-lab::unilabos -c robostack-staging -c conda-forge
```
2. 安装开发版 Uni-Lab-OS:

View File

@@ -0,0 +1,334 @@
# HTTP API 指南
本文档介绍如何通过 HTTP API 与 Uni-Lab-OS 进行交互,包括查询设备、提交任务和获取结果。
## 概述
Uni-Lab-OS 提供 RESTful HTTP API,允许外部系统通过标准 HTTP 请求控制实验室设备。API 基于 FastAPI 构建,默认运行在 `http://localhost:8002`。
### 基础信息
- **Base URL**: `http://localhost:8002/api/v1`
- **Content-Type**: `application/json`
- **响应格式**: JSON
### 通用响应结构
```json
{
"code": 0,
"data": { ... },
"message": "success"
}
```
| 字段 | 类型 | 说明 |
| --------- | ------ | ------------------ |
| `code` | int | 状态码,0 表示成功 |
| `data` | object | 响应数据 |
| `message` | string | 响应消息 |
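As a minimal sketch of consuming this envelope from Python (assuming the default base URL above, the third-party `requests` library, and an illustrative helper name), a client can unwrap `data` and turn any non-zero `code` into an exception:
```python
import requests

BASE_URL = "http://localhost:8002/api/v1"  # default from this guide; adjust if the port was changed

def api_get(path: str) -> dict:
    """GET an endpoint and unwrap the common {code, data, message} envelope."""
    resp = requests.get(f"{BASE_URL}{path}", timeout=10)
    resp.raise_for_status()
    body = resp.json()
    if body.get("code", -1) != 0:  # 0 means success, per the table above
        raise RuntimeError(f"API error {body.get('code')}: {body.get('message')}")
    return body["data"]
```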
## 快速开始
以下是一个完整的工作流示例:查询设备 → 获取动作 → 提交任务 → 获取结果。
### 步骤 1: 获取在线设备
```bash
curl -X GET "http://localhost:8002/api/v1/online-devices"
```
**响应示例**:
```json
{
"code": 0,
"data": {
"online_devices": {
"host_node": {
"device_key": "/host_node",
"namespace": "",
"machine_name": "本地",
"uuid": "xxx-xxx-xxx",
"node_name": "host_node"
}
},
"total_count": 1,
"timestamp": 1732612345.123
},
"message": "success"
}
```
### 步骤 2: 获取设备可用动作
```bash
curl -X GET "http://localhost:8002/api/v1/devices/host_node/actions"
```
**响应示例**:
```json
{
"code": 0,
"data": {
"device_id": "host_node",
"actions": {
"test_latency": {
"type_name": "unilabos_msgs.action._empty_in.EmptyIn",
"type_name_convert": "unilabos_msgs/action/_empty_in/EmptyIn",
"action_path": "/devices/host_node/test_latency",
"goal_info": "{}",
"is_busy": false,
"current_job_id": null
},
"create_resource": {
"type_name": "unilabos_msgs.action._resource_create_from_outer_easy.ResourceCreateFromOuterEasy",
"action_path": "/devices/host_node/create_resource",
"goal_info": "{res_id: '', device_id: '', class_name: '', ...}",
"is_busy": false,
"current_job_id": null
}
},
"action_count": 5
},
"message": "success"
}
```
**动作状态字段说明**:
| 字段 | 说明 |
| ---------------- | ----------------------------- |
| `type_name` | 动作类型的完整名称 |
| `action_path` | ROS2 动作路径 |
| `goal_info` | 动作参数模板 |
| `is_busy` | 动作是否正在执行 |
| `current_job_id` | 当前执行的任务 ID(如果繁忙) |
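If the `goal_info` template above is not detailed enough to fill in the arguments, each action also exposes a parameter schema through the schema endpoint listed later in this guide. A quick way to inspect it (the field names `schema`, `goal_default` and `action_type` follow this API's response, but the exact contents depend on how the device was registered):
```python
import requests

resp = requests.get(
    "http://localhost:8002/api/v1/devices/host_node/actions/create_resource/schema",
    timeout=10,
)
schema_info = resp.json()["data"]
print(schema_info.get("action_type"))   # ROS2 action type string
print(schema_info.get("schema"))        # parameter schema for action_args
print(schema_info.get("goal_default"))  # default values, if any
```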
### 步骤 3: 提交任务
```bash
curl -X POST "http://localhost:8002/api/v1/job/add" \
-H "Content-Type: application/json" \
-d '{"device_id":"host_node","action":"test_latency","action_args":{}}'
```
**请求体**:
```json
{
"device_id": "host_node",
"action": "test_latency",
"action_args": {}
}
```
**请求参数说明**:
| 字段 | 类型 | 必填 | 说明 |
| ------------- | ------ | ---- | ---------------------------------- |
| `device_id` | string | ✓ | 目标设备 ID |
| `action` | string | ✓ | 动作名称 |
| `action_args` | object | ✓ | 动作参数(根据动作类型不同而变化) |
**响应示例**:
```json
{
"code": 0,
"data": {
"jobId": "b6acb586-733a-42ab-9f73-55c9a52aa8bd",
"status": 1,
"result": {}
},
"message": "success"
}
```
**任务状态码**:
| 状态码 | 含义 | 说明 |
| ------ | --------- | ------------------------------ |
| 0 | UNKNOWN | 未知状态 |
| 1 | ACCEPTED | 任务已接受,等待执行 |
| 2 | EXECUTING | 任务执行中 |
| 3 | CANCELING | 任务取消中 |
| 4 | SUCCEEDED | 任务成功完成 |
| 5 | CANCELED | 任务已取消 |
| 6 | ABORTED | 任务中止(设备繁忙或执行失败) |
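For readability in client code, the numeric codes above can be mirrored with a small enum; this is a hypothetical helper, not part of Uni-Lab-OS itself:
```python
from enum import IntEnum

class JobStatus(IntEnum):
    UNKNOWN = 0
    ACCEPTED = 1
    EXECUTING = 2
    CANCELING = 3
    SUCCEEDED = 4
    CANCELED = 5
    ABORTED = 6

# Terminal states: the result field is only meaningful once one of these is reached.
TERMINAL_STATES = {JobStatus.SUCCEEDED, JobStatus.CANCELED, JobStatus.ABORTED}
```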
### 步骤 4: 查询任务状态和结果
```bash
curl -X GET "http://localhost:8002/api/v1/job/b6acb586-733a-42ab-9f73-55c9a52aa8bd/status"
```
**响应示例(执行中)**:
```json
{
"code": 0,
"data": {
"jobId": "b6acb586-733a-42ab-9f73-55c9a52aa8bd",
"status": 2,
"result": {}
},
"message": "success"
}
```
**响应示例(执行完成)**:
```json
{
"code": 0,
"data": {
"jobId": "b6acb586-733a-42ab-9f73-55c9a52aa8bd",
"status": 4,
"result": {
"error": "",
"suc": true,
"return_value": {
"avg_rtt_ms": 103.99,
"avg_time_diff_ms": 7181.55,
"max_time_error_ms": 7210.57,
"task_delay_ms": -1,
"raw_delay_ms": 33.19,
"test_count": 5,
"status": "success"
}
}
},
"message": "success"
}
```
> **注意**: 任务结果在首次查询后会被自动删除,请确保保存返回的结果数据。
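Because the result is discarded server-side after the first read, fetch it once and persist it immediately. A minimal sketch (the file path and function name are illustrative):
```python
import json
import requests

def fetch_and_save_result(job_id: str, out_path: str) -> dict:
    """Query the job status once and persist the payload before the server drops it."""
    resp = requests.get(f"http://localhost:8002/api/v1/job/{job_id}/status", timeout=10)
    data = resp.json()["data"]
    if data["status"] in (4, 5, 6):  # only terminal states carry a result
        with open(out_path, "w", encoding="utf-8") as f:
            json.dump(data, f, ensure_ascii=False, indent=2)
    return data
```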
## API 端点列表
### 设备相关
| 端点 | 方法 | 说明 |
| ---------------------------------------------------------- | ---- | ---------------------- |
| `/api/v1/online-devices` | GET | 获取在线设备列表 |
| `/api/v1/devices` | GET | 获取设备配置 |
| `/api/v1/devices/{device_id}/actions` | GET | 获取指定设备的可用动作 |
| `/api/v1/devices/{device_id}/actions/{action_name}/schema` | GET | 获取动作参数 Schema |
| `/api/v1/actions` | GET | 获取所有设备的可用动作 |
### 任务相关
| 端点 | 方法 | 说明 |
| ----------------------------- | ---- | ------------------ |
| `/api/v1/job/add` | POST | 提交新任务 |
| `/api/v1/job/{job_id}/status` | GET | 查询任务状态和结果 |
### 资源相关
| 端点 | 方法 | 说明 |
| ------------------- | ---- | ------------ |
| `/api/v1/resources` | GET | 获取资源列表 |
## 常见动作示例
### test_latency - 延迟测试
测试系统延迟,无需参数。
```bash
curl -X POST "http://localhost:8002/api/v1/job/add" \
-H "Content-Type: application/json" \
-d '{"device_id":"host_node","action":"test_latency","action_args":{}}'
```
### create_resource - 创建资源
在设备上创建新资源。
```bash
curl -X POST "http://localhost:8002/api/v1/job/add" \
-H "Content-Type: application/json" \
-d '{
"device_id": "host_node",
"action": "create_resource",
"action_args": {
"res_id": "my_plate",
"device_id": "host_node",
"class_name": "Plate",
"parent": "deck",
"bind_locations": {"x": 0, "y": 0, "z": 0}
}
}'
```
## 错误处理
### 设备繁忙
当设备正在执行其他任务时,提交新任务会返回 `status: 6`(ABORTED):
```json
{
"code": 0,
"data": {
"jobId": "xxx",
"status": 6,
"result": {}
},
"message": "success"
}
```
此时应等待当前任务完成后重试,或使用 `/devices/{device_id}/actions` 检查动作的 `is_busy` 状态。
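One way to follow this advice is to poll `is_busy` before submitting and retry with a short delay. A sketch under the same assumptions as the earlier examples (the retry count and delay are arbitrary):
```python
import time
import requests

BASE = "http://localhost:8002/api/v1"

def submit_when_idle(device_id: str, action: str, action_args: dict,
                     retries: int = 10, delay: float = 1.0) -> dict:
    """Wait until the target action is idle, then submit the job and return the response data."""
    for _ in range(retries):
        actions = requests.get(f"{BASE}/devices/{device_id}/actions", timeout=10).json()["data"]["actions"]
        if not actions.get(action, {}).get("is_busy", False):
            resp = requests.post(
                f"{BASE}/job/add",
                json={"device_id": device_id, "action": action, "action_args": action_args},
                timeout=10,
            )
            return resp.json()["data"]
        time.sleep(delay)
    raise TimeoutError(f"{device_id}/{action} stayed busy after {retries} checks")
```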
### 参数错误
```json
{
"code": 2002,
"data": { ... },
"message": "device_id is required"
}
```
## 轮询策略
推荐的任务状态轮询策略:
```python
import requests
import time
def wait_for_job(job_id, timeout=60, interval=0.5):
"""等待任务完成并返回结果"""
start_time = time.time()
while time.time() - start_time < timeout:
response = requests.get(f"http://localhost:8002/api/v1/job/{job_id}/status")
data = response.json()["data"]
status = data["status"]
if status in (4, 5, 6): # SUCCEEDED, CANCELED, ABORTED
return data
time.sleep(interval)
raise TimeoutError(f"Job {job_id} did not complete within {timeout} seconds")
# 使用示例
response = requests.post(
"http://localhost:8002/api/v1/job/add",
json={"device_id": "host_node", "action": "test_latency", "action_args": {}}
)
job_id = response.json()["data"]["jobId"]
result = wait_for_job(job_id)
print(result)
```
## 相关文档
- [设备注册指南](add_device.md)
- [动作定义指南](add_action.md)
- [网络架构概述](networking_overview.md)

View File

@@ -7,3 +7,17 @@ Uni-Lab-OS 是一个开源的实验室自动化操作系统,提供统一的设
intro.md
```
## 开发者指南
```{toctree}
:maxdepth: 2
developer_guide/http_api.md
developer_guide/networking_overview.md
developer_guide/add_device.md
developer_guide/add_action.md
developer_guide/add_registry.md
developer_guide/add_yaml.md
developer_guide/action_includes.md
```

Binary file not shown.

Before: 326 KiB → After: 262 KiB

View File

@@ -317,45 +317,6 @@ unilab --help
如果所有命令都正常输出,说明开发环境配置成功!
### 开发工具推荐
#### IDE
- **PyCharm Professional**: 强大的 Python IDE支持远程调试
- **VS Code**: 轻量级,配合 Python 扩展使用
- **Vim/Emacs**: 适合终端开发
#### 推荐的 VS Code 扩展
- Python
- Pylance
- ROS
- URDF
- YAML
#### 调试工具
```bash
# 安装调试工具
pip install ipdb pytest pytest-cov -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
# 代码质量检查
pip install black flake8 mypy -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
```
### 设置 pre-commit 钩子(可选)
```bash
# 安装 pre-commit
pip install pre-commit -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
# 设置钩子
pre-commit install
# 手动运行检查
pre-commit run --all-files
```
---
## 验证安装

View File

@@ -1,6 +1,6 @@
package:
name: ros-humble-unilabos-msgs
version: 0.10.11
version: 0.10.12
source:
path: ../../unilabos_msgs
target_directory: src

View File

@@ -1,6 +1,6 @@
package:
name: unilabos
version: "0.10.11"
version: "0.10.12"
source:
path: ../..

View File

@@ -2,7 +2,6 @@ import json
import logging
import traceback
import uuid
import xml.etree.ElementTree as ET
from typing import Any, Dict, List
import networkx as nx
@@ -25,7 +24,15 @@ class SimpleGraph:
def add_edge(self, source, target, **attrs):
"""添加边"""
edge = {"source": source, "target": target, **attrs}
# edge = {"source": source, "target": target, **attrs}
edge = {
"source": source, "target": target,
"source_node_uuid": source,
"target_node_uuid": target,
"source_handle_io": "source",
"target_handle_io": "target",
**attrs
}
self.edges.append(edge)
def to_dict(self):
@@ -42,6 +49,7 @@ class SimpleGraph:
"multigraph": False,
"graph": {},
"nodes": nodes_list,
"edges": self.edges,
"links": self.edges,
}
@@ -58,495 +66,8 @@ def extract_json_from_markdown(text: str) -> str:
return text
def convert_to_type(val: str) -> Any:
"""将字符串值转换为适当的数据类型"""
if val == "True":
return True
if val == "False":
return False
if val == "?":
return None
if val.endswith(" g"):
return float(val.split(" ")[0])
if val.endswith("mg"):
return float(val.split("mg")[0])
elif val.endswith("mmol"):
return float(val.split("mmol")[0]) / 1000
elif val.endswith("mol"):
return float(val.split("mol")[0])
elif val.endswith("ml"):
return float(val.split("ml")[0])
elif val.endswith("RPM"):
return float(val.split("RPM")[0])
elif val.endswith(" °C"):
return float(val.split(" ")[0])
elif val.endswith(" %"):
return float(val.split(" ")[0])
return val
def refactor_data(data: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""统一的数据重构函数,根据操作类型自动选择模板"""
refactored_data = []
# 定义操作映射,包含生物实验和有机化学的所有操作
OPERATION_MAPPING = {
# 生物实验操作
"transfer_liquid": "SynBioFactory-liquid_handler.prcxi-transfer_liquid",
"transfer": "SynBioFactory-liquid_handler.biomek-transfer",
"incubation": "SynBioFactory-liquid_handler.biomek-incubation",
"move_labware": "SynBioFactory-liquid_handler.biomek-move_labware",
"oscillation": "SynBioFactory-liquid_handler.biomek-oscillation",
# 有机化学操作
"HeatChillToTemp": "SynBioFactory-workstation-HeatChillProtocol",
"StopHeatChill": "SynBioFactory-workstation-HeatChillStopProtocol",
"StartHeatChill": "SynBioFactory-workstation-HeatChillStartProtocol",
"HeatChill": "SynBioFactory-workstation-HeatChillProtocol",
"Dissolve": "SynBioFactory-workstation-DissolveProtocol",
"Transfer": "SynBioFactory-workstation-TransferProtocol",
"Evaporate": "SynBioFactory-workstation-EvaporateProtocol",
"Recrystallize": "SynBioFactory-workstation-RecrystallizeProtocol",
"Filter": "SynBioFactory-workstation-FilterProtocol",
"Dry": "SynBioFactory-workstation-DryProtocol",
"Add": "SynBioFactory-workstation-AddProtocol",
}
UNSUPPORTED_OPERATIONS = ["Purge", "Wait", "Stir", "ResetHandling"]
for step in data:
operation = step.get("action")
if not operation or operation in UNSUPPORTED_OPERATIONS:
continue
# 处理重复操作
if operation == "Repeat":
times = step.get("times", step.get("parameters", {}).get("times", 1))
sub_steps = step.get("steps", step.get("parameters", {}).get("steps", []))
for i in range(int(times)):
sub_data = refactor_data(sub_steps)
refactored_data.extend(sub_data)
continue
# 获取模板名称
template = OPERATION_MAPPING.get(operation)
if not template:
# 自动推断模板类型
if operation.lower() in ["transfer", "incubation", "move_labware", "oscillation"]:
template = f"SynBioFactory-liquid_handler.biomek-{operation}"
else:
template = f"SynBioFactory-workstation-{operation}Protocol"
# 创建步骤数据
step_data = {
"template": template,
"description": step.get("description", step.get("purpose", f"{operation} operation")),
"lab_node_type": "Device",
"parameters": step.get("parameters", step.get("action_args", {})),
}
refactored_data.append(step_data)
return refactored_data
def build_protocol_graph(
labware_info: List[Dict[str, Any]], protocol_steps: List[Dict[str, Any]], workstation_name: str
) -> SimpleGraph:
"""统一的协议图构建函数,根据设备类型自动选择构建逻辑"""
G = SimpleGraph()
resource_last_writer = {}
LAB_NAME = "SynBioFactory"
protocol_steps = refactor_data(protocol_steps)
# 检查协议步骤中的模板来判断协议类型
has_biomek_template = any(
("biomek" in step.get("template", "")) or ("prcxi" in step.get("template", ""))
for step in protocol_steps
)
if has_biomek_template:
# 生物实验协议图构建
for labware_id, labware in labware_info.items():
node_id = str(uuid.uuid4())
labware_attrs = labware.copy()
labware_id = labware_attrs.pop("id", labware_attrs.get("name", f"labware_{uuid.uuid4()}"))
labware_attrs["description"] = labware_id
labware_attrs["lab_node_type"] = (
"Reagent" if "Plate" in str(labware_id) else "Labware" if "Rack" in str(labware_id) else "Sample"
)
labware_attrs["device_id"] = workstation_name
G.add_node(node_id, template=f"{LAB_NAME}-host_node-create_resource", **labware_attrs)
resource_last_writer[labware_id] = f"{node_id}:labware"
# 处理协议步骤
prev_node = None
for i, step in enumerate(protocol_steps):
node_id = str(uuid.uuid4())
G.add_node(node_id, **step)
# 添加控制流边
if prev_node is not None:
G.add_edge(prev_node, node_id, source_port="ready", target_port="ready")
prev_node = node_id
# 处理物料流
params = step.get("parameters", {})
if "sources" in params and params["sources"] in resource_last_writer:
source_node, source_port = resource_last_writer[params["sources"]].split(":")
G.add_edge(source_node, node_id, source_port=source_port, target_port="labware")
if "targets" in params:
resource_last_writer[params["targets"]] = f"{node_id}:labware"
# 添加协议结束节点
end_id = str(uuid.uuid4())
G.add_node(end_id, template=f"{LAB_NAME}-liquid_handler.biomek-run_protocol")
if prev_node is not None:
G.add_edge(prev_node, end_id, source_port="ready", target_port="ready")
else:
# 有机化学协议图构建
WORKSTATION_ID = workstation_name
# 为所有labware创建资源节点
for item_id, item in labware_info.items():
# item_id = item.get("id") or item.get("name", f"item_{uuid.uuid4()}")
node_id = str(uuid.uuid4())
# 判断节点类型
if item.get("type") == "hardware" or "reactor" in str(item_id).lower():
if "reactor" not in str(item_id).lower():
continue
lab_node_type = "Sample"
description = f"Prepare Reactor: {item_id}"
liquid_type = []
liquid_volume = []
else:
lab_node_type = "Reagent"
description = f"Add Reagent to Flask: {item_id}"
liquid_type = [item_id]
liquid_volume = [1e5]
G.add_node(
node_id,
template=f"{LAB_NAME}-host_node-create_resource",
description=description,
lab_node_type=lab_node_type,
res_id=item_id,
device_id=WORKSTATION_ID,
class_name="container",
parent=WORKSTATION_ID,
bind_locations={"x": 0.0, "y": 0.0, "z": 0.0},
liquid_input_slot=[-1],
liquid_type=liquid_type,
liquid_volume=liquid_volume,
slot_on_deck="",
role=item.get("role", ""),
)
resource_last_writer[item_id] = f"{node_id}:labware"
last_control_node_id = None
# 处理协议步骤
for step in protocol_steps:
node_id = str(uuid.uuid4())
G.add_node(node_id, **step)
# 控制流
if last_control_node_id is not None:
G.add_edge(last_control_node_id, node_id, source_port="ready", target_port="ready")
last_control_node_id = node_id
# 物料流
params = step.get("parameters", {})
input_resources = {
"Vessel": params.get("vessel"),
"ToVessel": params.get("to_vessel"),
"FromVessel": params.get("from_vessel"),
"reagent": params.get("reagent"),
"solvent": params.get("solvent"),
"compound": params.get("compound"),
"sources": params.get("sources"),
"targets": params.get("targets"),
}
for target_port, resource_name in input_resources.items():
if resource_name and resource_name in resource_last_writer:
source_node, source_port = resource_last_writer[resource_name].split(":")
G.add_edge(source_node, node_id, source_port=source_port, target_port=target_port)
output_resources = {
"VesselOut": params.get("vessel"),
"FromVesselOut": params.get("from_vessel"),
"ToVesselOut": params.get("to_vessel"),
"FiltrateOut": params.get("filtrate_vessel"),
"reagent": params.get("reagent"),
"solvent": params.get("solvent"),
"compound": params.get("compound"),
"sources_out": params.get("sources"),
"targets_out": params.get("targets"),
}
for source_port, resource_name in output_resources.items():
if resource_name:
resource_last_writer[resource_name] = f"{node_id}:{source_port}"
return G
def draw_protocol_graph(protocol_graph: SimpleGraph, output_path: str):
"""
(辅助功能) 使用 networkx 和 matplotlib 绘制协议工作流图,用于可视化。
"""
if not protocol_graph:
print("Cannot draw graph: Graph object is empty.")
return
G = nx.DiGraph()
for node_id, attrs in protocol_graph.nodes.items():
label = attrs.get("description", attrs.get("template", node_id[:8]))
G.add_node(node_id, label=label, **attrs)
for edge in protocol_graph.edges:
G.add_edge(edge["source"], edge["target"])
plt.figure(figsize=(20, 15))
try:
pos = nx.nx_agraph.graphviz_layout(G, prog="dot")
except Exception:
pos = nx.shell_layout(G) # Fallback layout
node_labels = {node: data["label"] for node, data in G.nodes(data=True)}
nx.draw(
G,
pos,
with_labels=False,
node_size=2500,
node_color="skyblue",
node_shape="o",
edge_color="gray",
width=1.5,
arrowsize=15,
)
nx.draw_networkx_labels(G, pos, labels=node_labels, font_size=8, font_weight="bold")
plt.title("Chemical Protocol Workflow Graph", size=15)
plt.savefig(output_path, dpi=300, bbox_inches="tight")
plt.close()
print(f" - Visualization saved to '{output_path}'")
from networkx.drawing.nx_agraph import to_agraph
import re
COMPASS = {"n","e","s","w","ne","nw","se","sw","c"}
def _is_compass(port: str) -> bool:
return isinstance(port, str) and port.lower() in COMPASS
def draw_protocol_graph_with_ports(protocol_graph, output_path: str, rankdir: str = "LR"):
"""
使用 Graphviz 端口语法绘制协议工作流图。
- 若边上的 source_port/target_port 是 compass(n/e/s/w/...),直接用 compass。
- 否则自动为节点创建 record 形状并定义命名端口 <portname>。
最终由 PyGraphviz 渲染并输出到 output_path(后缀决定格式,如 .png/.svg/.pdf)。
"""
if not protocol_graph:
print("Cannot draw graph: Graph object is empty.")
return
# 1) 先用 networkx 搭建有向图,保留端口属性
G = nx.DiGraph()
for node_id, attrs in protocol_graph.nodes.items():
label = attrs.get("description", attrs.get("template", node_id[:8]))
# 保留一个干净的“中心标签”,用于放在 record 的中间槽
G.add_node(node_id, _core_label=str(label), **{k:v for k,v in attrs.items() if k not in ("label",)})
edges_data = []
in_ports_by_node = {} # 收集命名输入端口
out_ports_by_node = {} # 收集命名输出端口
for edge in protocol_graph.edges:
u = edge["source"]
v = edge["target"]
sp = edge.get("source_port")
tp = edge.get("target_port")
# 记录到图里(保留原始端口信息)
G.add_edge(u, v, source_port=sp, target_port=tp)
edges_data.append((u, v, sp, tp))
# 如果不是 compass就按“命名端口”先归类等会儿给节点造 record
if sp and not _is_compass(sp):
out_ports_by_node.setdefault(u, set()).add(str(sp))
if tp and not _is_compass(tp):
in_ports_by_node.setdefault(v, set()).add(str(tp))
# 2) 转为 AGraph使用 Graphviz 渲染
A = to_agraph(G)
A.graph_attr.update(rankdir=rankdir, splines="true", concentrate="false", fontsize="10")
A.node_attr.update(shape="box", style="rounded,filled", fillcolor="lightyellow", color="#999999", fontname="Helvetica")
A.edge_attr.update(arrowsize="0.8", color="#666666")
# 3) 为需要命名端口的节点设置 record 形状与 label
# 左列 = 输入端口;中间 = 核心标签;右列 = 输出端口
for n in A.nodes():
node = A.get_node(n)
core = G.nodes[n].get("_core_label", n)
in_ports = sorted(in_ports_by_node.get(n, []))
out_ports = sorted(out_ports_by_node.get(n, []))
# 如果该节点涉及命名端口,则用 record否则保留原 box
if in_ports or out_ports:
def port_fields(ports):
if not ports:
return " " # 必须留一个空槽占位
# 每个端口一个小格子,<p> name
return "|".join(f"<{re.sub(r'[^A-Za-z0-9_:.|-]', '_', p)}> {p}" for p in ports)
left = port_fields(in_ports)
right = port_fields(out_ports)
# 三栏:左(入) | 中(节点名) | 右(出)
record_label = f"{{ {left} | {core} | {right} }}"
node.attr.update(shape="record", label=record_label)
else:
# 没有命名端口:普通盒子,显示核心标签
node.attr.update(label=str(core))
# 4) 给边设置 headport / tailport
# - 若端口为 compass:直接用 compass(e.g., headport="e")
# - 若端口为命名端口:使用在 record 中定义的 <port> 名(同名即可)
for (u, v, sp, tp) in edges_data:
e = A.get_edge(u, v)
# Graphviz 属性tail 是源head 是目标
if sp:
if _is_compass(sp):
e.attr["tailport"] = sp.lower()
else:
# 与 record label 中 <port> 名一致;特殊字符已在 label 中做了清洗
e.attr["tailport"] = re.sub(r'[^A-Za-z0-9_:.|-]', '_', str(sp))
if tp:
if _is_compass(tp):
e.attr["headport"] = tp.lower()
else:
e.attr["headport"] = re.sub(r'[^A-Za-z0-9_:.|-]', '_', str(tp))
# 可选:若想让边更贴边缘,可设置 constraint/spline 等
# e.attr["arrowhead"] = "vee"
# 5) 输出
A.draw(output_path, prog="dot")
print(f" - Port-aware workflow rendered to '{output_path}'")
def flatten_xdl_procedure(procedure_elem: ET.Element) -> List[ET.Element]:
"""展平嵌套的XDL程序结构"""
flattened_operations = []
TEMP_UNSUPPORTED_PROTOCOL = ["Purge", "Wait", "Stir", "ResetHandling"]
def extract_operations(element: ET.Element):
if element.tag not in ["Prep", "Reaction", "Workup", "Purification", "Procedure"]:
if element.tag not in TEMP_UNSUPPORTED_PROTOCOL:
flattened_operations.append(element)
for child in element:
extract_operations(child)
for child in procedure_elem:
extract_operations(child)
return flattened_operations
def parse_xdl_content(xdl_content: str) -> tuple:
"""解析XDL内容"""
try:
xdl_content_cleaned = "".join(c for c in xdl_content if c.isprintable())
root = ET.fromstring(xdl_content_cleaned)
synthesis_elem = root.find("Synthesis")
if synthesis_elem is None:
return None, None, None
# 解析硬件组件
hardware_elem = synthesis_elem.find("Hardware")
hardware = []
if hardware_elem is not None:
hardware = [{"id": c.get("id"), "type": c.get("type")} for c in hardware_elem.findall("Component")]
# 解析试剂
reagents_elem = synthesis_elem.find("Reagents")
reagents = []
if reagents_elem is not None:
reagents = [{"name": r.get("name"), "role": r.get("role", "")} for r in reagents_elem.findall("Reagent")]
# 解析程序
procedure_elem = synthesis_elem.find("Procedure")
if procedure_elem is None:
return None, None, None
flattened_operations = flatten_xdl_procedure(procedure_elem)
return hardware, reagents, flattened_operations
except ET.ParseError as e:
raise ValueError(f"Invalid XDL format: {e}")
def convert_xdl_to_dict(xdl_content: str) -> Dict[str, Any]:
"""
将XDL XML格式转换为标准的字典格式
Args:
xdl_content: XDL XML内容
Returns:
转换结果,包含步骤和器材信息
"""
try:
hardware, reagents, flattened_operations = parse_xdl_content(xdl_content)
if hardware is None:
return {"error": "Failed to parse XDL content", "success": False}
# 将XDL元素转换为字典格式
steps_data = []
for elem in flattened_operations:
# 转换参数类型
parameters = {}
for key, val in elem.attrib.items():
converted_val = convert_to_type(val)
if converted_val is not None:
parameters[key] = converted_val
step_dict = {
"operation": elem.tag,
"parameters": parameters,
"description": elem.get("purpose", f"Operation: {elem.tag}"),
}
steps_data.append(step_dict)
# 合并硬件和试剂为统一的labware_info格式
labware_data = []
labware_data.extend({"id": hw["id"], "type": "hardware", **hw} for hw in hardware)
labware_data.extend({"name": reagent["name"], "type": "reagent", **reagent} for reagent in reagents)
return {
"success": True,
"steps": steps_data,
"labware": labware_data,
"message": f"Successfully converted XDL to dict format. Found {len(steps_data)} steps and {len(labware_data)} labware items.",
}
except Exception as e:
error_msg = f"XDL conversion failed: {str(e)}"
logger.error(error_msg)
return {"error": error_msg, "success": False}
def create_workflow(

View File

@@ -4,7 +4,7 @@ package_name = 'unilabos'
setup(
name=package_name,
version='0.10.11',
version='0.10.12',
packages=find_packages(),
include_package_data=True,
install_requires=['setuptools'],

View File

Before: 148 KiB → After: 148 KiB

View File

Before: 140 KiB → After: 140 KiB

View File

Before: 117 KiB → After: 117 KiB

View File

@@ -0,0 +1,35 @@
import sys
from datetime import datetime
from pathlib import Path
ROOT_DIR = Path(__file__).resolve().parents[2]
if str(ROOT_DIR) not in sys.path:
sys.path.insert(0, str(ROOT_DIR))
import pytest
from unilabos.workflow.convert_from_json import (
convert_from_json,
normalize_steps as _normalize_steps,
normalize_labware as _normalize_labware,
)
from unilabos.workflow.common import draw_protocol_graph_with_ports
@pytest.mark.parametrize(
"protocol_name",
[
"example_bio",
# "bioyond_materials_liquidhandling_1",
"example_prcxi",
],
)
def test_build_protocol_graph(protocol_name):
data_path = Path(__file__).with_name(f"{protocol_name}.json")
graph = convert_from_json(data_path, workstation_name="PRCXi")
timestamp = datetime.now().strftime("%Y%m%d_%H%M")
output_path = data_path.with_name(f"{protocol_name}_graph_{timestamp}.png")
draw_protocol_graph_with_ports(graph, str(output_path))
print(graph)

View File

@@ -1 +1 @@
__version__ = "0.10.11"
__version__ = "0.10.12"

View File

@@ -141,7 +141,7 @@ class CommunicationClientFactory:
"""
if cls._client_cache is None:
cls._client_cache = cls.create_client(protocol)
logger.info(f"[CommunicationFactory] Created {type(cls._client_cache).__name__} client")
logger.trace(f"[CommunicationFactory] Created {type(cls._client_cache).__name__} client")
return cls._client_cache

View File

@@ -20,6 +20,7 @@ if unilabos_dir not in sys.path:
from unilabos.utils.banner_print import print_status, print_unilab_banner
from unilabos.config.config import load_config, BasicConfig, HTTPConfig
def load_config_from_file(config_path):
if config_path is None:
config_path = os.environ.get("UNILABOS_BASICCONFIG_CONFIG_PATH", None)
@@ -41,7 +42,7 @@ def convert_argv_dashes_to_underscores(args: argparse.ArgumentParser):
for i, arg in enumerate(sys.argv):
for option_string in option_strings:
if arg.startswith(option_string):
new_arg = arg[:2] + arg[2:len(option_string)].replace("-", "_") + arg[len(option_string):]
new_arg = arg[:2] + arg[2 : len(option_string)].replace("-", "_") + arg[len(option_string) :]
sys.argv[i] = new_arg
break
@@ -49,6 +50,8 @@ def convert_argv_dashes_to_underscores(args: argparse.ArgumentParser):
def parse_args():
"""解析命令行参数"""
parser = argparse.ArgumentParser(description="Start Uni-Lab Edge server.")
subparsers = parser.add_subparsers(title="Valid subcommands", dest="command")
parser.add_argument("-g", "--graph", help="Physical setup graph file path.")
parser.add_argument("-c", "--controllers", default=None, help="Controllers config file path.")
parser.add_argument(
@@ -153,21 +156,54 @@ def parse_args():
default=False,
help="Complete registry information",
)
# workflow upload subcommand
workflow_parser = subparsers.add_parser(
"workflow_upload",
aliases=["wf"],
help="Upload workflow from xdl/json/python files",
)
workflow_parser.add_argument(
"-f",
"--workflow_file",
type=str,
required=True,
help="Path to the workflow file (JSON format)",
)
workflow_parser.add_argument(
"-n",
"--workflow_name",
type=str,
default=None,
help="Workflow name, if not provided will use the name from file or filename",
)
workflow_parser.add_argument(
"--tags",
type=str,
nargs="*",
default=[],
help="Tags for the workflow (space-separated)",
)
workflow_parser.add_argument(
"--published",
action="store_true",
default=False,
help="Whether to publish the workflow (default: False)",
)
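# Example invocations of the subcommand defined above (file names are hypothetical):
#   unilab workflow_upload -f my_workflow.json -n "My workflow" --tags demo --published
#   unilab wf -f my_workflow.json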
return parser
def main():
"""主函数"""
# 解析命令行参数
args = parse_args()
convert_argv_dashes_to_underscores(args)
args_dict = vars(args.parse_args())
parser = parse_args()
convert_argv_dashes_to_underscores(parser)
args = parser.parse_args()
args_dict = vars(args)
# 环境检查 - 检查并自动安装必需的包 (可选)
if not args_dict.get("skip_env_check", False):
from unilabos.utils.environment_check import check_environment
print_status("正在进行环境依赖检查...", "info")
if not check_environment(auto_install=True):
print_status("环境检查失败,程序退出", "error")
os._exit(1)
@@ -218,19 +254,20 @@ def main():
if hasattr(BasicConfig, "log_level"):
logger.info(f"Log level set to '{BasicConfig.log_level}' from config file.")
configure_logger(loglevel=BasicConfig.log_level)
configure_logger(loglevel=BasicConfig.log_level, working_dir=working_dir)
if args_dict["addr"] == "test":
if args.addr != parser.get_default("addr"):
if args.addr == "test":
print_status("使用测试环境地址", "info")
HTTPConfig.remote_addr = "https://uni-lab.test.bohrium.com/api/v1"
elif args_dict["addr"] == "uat":
elif args.addr == "uat":
print_status("使用uat环境地址", "info")
HTTPConfig.remote_addr = "https://uni-lab.uat.bohrium.com/api/v1"
elif args_dict["addr"] == "local":
elif args.addr == "local":
print_status("使用本地环境地址", "info")
HTTPConfig.remote_addr = "http://127.0.0.1:48197/api/v1"
else:
HTTPConfig.remote_addr = args_dict.get("addr", "")
HTTPConfig.remote_addr = args.addr
# 设置BasicConfig参数
if args_dict.get("ak", ""):
@@ -239,9 +276,12 @@ def main():
if args_dict.get("sk", ""):
BasicConfig.sk = args_dict.get("sk", "")
print_status("传入了sk参数优先采用传入参数", "info")
BasicConfig.working_dir = working_dir
workflow_upload = args_dict.get("command") in ("workflow_upload", "wf")
# 使用远程资源启动
if args_dict["use_remote_resource"]:
if not workflow_upload and args_dict["use_remote_resource"]:
print_status("使用远程资源启动", "info")
from unilabos.app.web import http_client
@@ -254,7 +294,6 @@ def main():
BasicConfig.port = args_dict["port"] if args_dict["port"] else BasicConfig.port
BasicConfig.disable_browser = args_dict["disable_browser"] or BasicConfig.disable_browser
BasicConfig.working_dir = working_dir
BasicConfig.is_host_mode = not args_dict.get("is_slave", False)
BasicConfig.slave_no_host = args_dict.get("slave_no_host", False)
BasicConfig.upload_registry = args_dict.get("upload_registry", False)
@@ -283,9 +322,31 @@ def main():
# 注册表
lab_registry = build_registry(
args_dict["registry_path"], args_dict.get("complete_registry", False), args_dict["upload_registry"]
args_dict["registry_path"], args_dict.get("complete_registry", False), BasicConfig.upload_registry
)
if BasicConfig.upload_registry:
# 设备注册到服务端 - 需要 ak 和 sk
if BasicConfig.ak and BasicConfig.sk:
print_status("开始注册设备到服务端...", "info")
try:
register_devices_and_resources(lab_registry)
print_status("设备注册完成", "info")
except Exception as e:
print_status(f"设备注册失败: {e}", "error")
else:
print_status("未提供 ak 和 sk跳过设备注册", "info")
else:
print_status("本次启动注册表不报送云端,如果您需要联网调试,请在启动命令增加--upload_registry", "warning")
# 处理 workflow_upload 子命令
if workflow_upload:
from unilabos.workflow.wf_utils import handle_workflow_upload_command
handle_workflow_upload_command(args_dict)
print_status("工作流上传完成,程序退出", "info")
os._exit(0)
if not BasicConfig.ak or not BasicConfig.sk:
print_status("后续运行必须拥有一个实验室,请前往 https://uni-lab.bohrium.com 注册实验室!", "warning")
os._exit(1)
@@ -362,20 +423,6 @@ def main():
args_dict["devices_config"] = resource_tree_set
args_dict["graph"] = graph_res.physical_setup_graph
if BasicConfig.upload_registry:
# 设备注册到服务端 - 需要 ak 和 sk
if BasicConfig.ak and BasicConfig.sk:
print_status("开始注册设备到服务端...", "info")
try:
register_devices_and_resources(lab_registry)
print_status("设备注册完成", "info")
except Exception as e:
print_status(f"设备注册失败: {e}", "error")
else:
print_status("未提供 ak 和 sk跳过设备注册", "info")
else:
print_status("本次启动注册表不报送云端,如果您需要联网调试,请在启动命令增加--upload_registry", "warning")
if args_dict["controllers"] is not None:
args_dict["controllers_config"] = yaml.safe_load(open(args_dict["controllers"], encoding="utf-8"))
else:
@@ -390,6 +437,7 @@ def main():
comm_client = get_communication_client()
if "websocket" in args_dict["app_bridges"]:
args_dict["bridges"].append(comm_client)
def _exit(signum, frame):
comm_client.stop()
sys.exit(0)
@@ -431,16 +479,13 @@ def main():
resource_visualization.start()
except OSError as e:
if "AMENT_PREFIX_PATH" in str(e):
print_status(
f"ROS 2环境未正确设置跳过3D可视化启动。错误详情: {e}",
"warning"
)
print_status(f"ROS 2环境未正确设置跳过3D可视化启动。错误详情: {e}", "warning")
print_status(
"建议解决方案:\n"
"1. 激活Conda环境: conda activate unilab\n"
"2. 或使用 --backend simple 参数\n"
"3. 或使用 --visual disable 参数禁用可视化",
"info"
"info",
)
else:
raise

View File

@@ -51,21 +51,25 @@ class Resp(BaseModel):
class JobAddReq(BaseModel):
device_id: str = Field(examples=["Gripper"], description="device id")
action: str = Field(examples=["_execute_driver_command_async"], description="action name", default="")
action_type: str = Field(examples=["unilabos_msgs.action._str_single_input.StrSingleInput"], description="action name", default="")
action_args: dict = Field(examples=[{'string': 'string'}], description="action name", default="")
task_id: str = Field(examples=["task_id"], description="task uuid")
job_id: str = Field(examples=["job_id"], description="goal uuid")
node_id: str = Field(examples=["node_id"], description="node uuid")
server_info: dict = Field(examples=[{"send_timestamp": 1717000000.0}], description="server info")
action_type: str = Field(
examples=["unilabos_msgs.action._str_single_input.StrSingleInput"], description="action type", default=""
)
action_args: dict = Field(examples=[{"string": "string"}], description="action arguments", default_factory=dict)
task_id: str = Field(examples=["task_id"], description="task uuid (auto-generated if empty)", default="")
job_id: str = Field(examples=["job_id"], description="goal uuid (auto-generated if empty)", default="")
node_id: str = Field(examples=["node_id"], description="node uuid", default="")
server_info: dict = Field(
examples=[{"send_timestamp": 1717000000.0}],
description="server info (auto-generated if empty)",
default_factory=dict,
)
data: dict = Field(examples=[{"position": 30, "torque": 5, "action": "push_to"}], default={})
data: dict = Field(examples=[{"position": 30, "torque": 5, "action": "push_to"}], default_factory=dict)
class JobStepFinishReq(BaseModel):
token: str = Field(examples=["030944"], description="token")
request_time: str = Field(
examples=["2024-12-12 12:12:12.xxx"], description="requestTime"
)
request_time: str = Field(examples=["2024-12-12 12:12:12.xxx"], description="requestTime")
data: dict = Field(
examples=[
{
@@ -83,9 +87,7 @@ class JobStepFinishReq(BaseModel):
class JobPreintakeFinishReq(BaseModel):
token: str = Field(examples=["030944"], description="token")
request_time: str = Field(
examples=["2024-12-12 12:12:12.xxx"], description="requestTime"
)
request_time: str = Field(examples=["2024-12-12 12:12:12.xxx"], description="requestTime")
data: dict = Field(
examples=[
{
@@ -102,9 +104,7 @@ class JobPreintakeFinishReq(BaseModel):
class JobFinishReq(BaseModel):
token: str = Field(examples=["030944"], description="token")
request_time: str = Field(
examples=["2024-12-12 12:12:12.xxx"], description="requestTime"
)
request_time: str = Field(examples=["2024-12-12 12:12:12.xxx"], description="requestTime")
data: dict = Field(
examples=[
{
@@ -133,6 +133,10 @@ class JobData(BaseModel):
default=0,
description="0:UNKNOWN, 1:ACCEPTED, 2:EXECUTING, 3:CANCELING, 4:SUCCEEDED, 5:CANCELED, 6:ABORTED",
)
result: dict = Field(
default_factory=dict,
description="Job result data (available when status is SUCCEEDED/CANCELED/ABORTED)",
)
class JobStatusResp(Resp):

View File

@@ -34,14 +34,14 @@ def _get_oss_token(
client = http_client
# 构造scene参数: driver_name-exp_type
scene = f"{driver_name}-{exp_type}"
sub_path = f"{driver_name}-{exp_type}"
# 构造请求URL,使用client的remote_addr(已包含/api/v1/)
url = f"{client.remote_addr}/applications/token"
params = {"scene": scene, "filename": filename}
params = {"sub_path": sub_path, "filename": filename, "scene": "job"}
try:
logger.info(f"[OSS] 请求预签名URL: scene={scene}, filename={filename}")
logger.info(f"[OSS] 请求预签名URL: sub_path={sub_path}, filename={filename}")
response = requests.get(url, params=params, headers={"Authorization": f"Lab {client.auth}"}, timeout=10)
if response.status_code == 200:

View File

@@ -9,13 +9,22 @@ import asyncio
import yaml
from unilabos.app.web.controler import devices, job_add, job_info
from unilabos.app.web.controller import (
devices,
job_add,
job_info,
get_online_devices,
get_device_actions,
get_action_schema,
get_all_available_actions,
)
from unilabos.app.model import (
Resp,
RespCode,
JobStatusResp,
JobAddResp,
JobAddReq,
JobData,
)
from unilabos.app.web.utils.host_utils import get_host_node_info
from unilabos.registry.registry import lab_registry
@@ -1234,6 +1243,65 @@ def get_devices():
return Resp(data=dict(data))
@api.get("/online-devices", summary="Online devices list", response_model=Resp)
def api_get_online_devices():
"""获取在线设备列表
返回当前在线的设备列表,包含设备ID、命名空间、机器名等信息
"""
isok, data = get_online_devices()
if not isok:
return Resp(code=RespCode.ErrorHostNotInit, message=data.get("error", "Unknown error"))
return Resp(data=data)
@api.get("/devices/{device_id}/actions", summary="Device actions list", response_model=Resp)
def api_get_device_actions(device_id: str):
"""获取设备可用的动作列表
Args:
device_id: 设备ID
返回指定设备的所有可用动作,包含动作名称、类型、是否繁忙等信息
"""
isok, data = get_device_actions(device_id)
if not isok:
return Resp(code=RespCode.ErrorInvalidReq, message=data.get("error", "Unknown error"))
return Resp(data=data)
@api.get("/devices/{device_id}/actions/{action_name}/schema", summary="Action schema", response_model=Resp)
def api_get_action_schema(device_id: str, action_name: str):
"""获取动作的Schema详情
Args:
device_id: 设备ID
action_name: 动作名称
返回动作的参数Schema、默认值、类型等详细信息
"""
isok, data = get_action_schema(device_id, action_name)
if not isok:
return Resp(code=RespCode.ErrorInvalidReq, message=data.get("error", "Unknown error"))
return Resp(data=data)
@api.get("/actions", summary="All available actions", response_model=Resp)
def api_get_all_actions():
"""获取所有设备的可用动作
返回所有已注册设备的动作列表,包含设备信息和各动作的状态
"""
isok, data = get_all_available_actions()
if not isok:
return Resp(code=RespCode.ErrorHostNotInit, message=data.get("error", "Unknown error"))
return Resp(data=data)
@api.get("/job/{id}/status", summary="Job status", response_model=JobStatusResp)
def job_status(id: str):
"""获取任务状态"""
@@ -1244,11 +1312,22 @@ def job_status(id: str):
@api.post("/job/add", summary="Create job", response_model=JobAddResp)
def post_job_add(req: JobAddReq):
"""创建任务"""
device_id = req.device_id
if not req.data:
return Resp(code=RespCode.ErrorInvalidReq, message="Invalid request data")
# 检查必要参数:device_id 和 action
if not req.device_id:
return JobAddResp(
data=JobData(jobId="", status=6),
code=RespCode.ErrorInvalidReq,
message="device_id is required",
)
action_name = req.data.get("action", req.action) if req.data else req.action
if not action_name:
return JobAddResp(
data=JobData(jobId="", status=6),
code=RespCode.ErrorInvalidReq,
message="action is required",
)
req.device_id = device_id
data = job_add(req)
return JobAddResp(data=data)

View File

@@ -76,7 +76,8 @@ class HTTPClient:
Dict[str, str]: 旧UUID到新UUID的映射关系 {old_uuid: new_uuid}
"""
with open(os.path.join(BasicConfig.working_dir, "req_resource_tree_add.json"), "w", encoding="utf-8") as f:
f.write(json.dumps({"nodes": [x for xs in resources.dump() for x in xs], "mount_uuid": mount_uuid}, indent=4))
payload = {"nodes": [x for xs in resources.dump() for x in xs], "mount_uuid": mount_uuid}
f.write(json.dumps(payload, indent=4))
# 从序列化数据中提取所有节点的UUID保存旧UUID
old_uuids = {n.res_content.uuid: n for n in resources.all_nodes}
if not self.initialized or first_add:
@@ -331,6 +332,67 @@ class HTTPClient:
logger.error(f"响应内容: {response.text}")
return None
def workflow_import(
self,
name: str,
workflow_uuid: str,
workflow_name: str,
nodes: List[Dict[str, Any]],
edges: List[Dict[str, Any]],
tags: Optional[List[str]] = None,
published: bool = False,
) -> Dict[str, Any]:
"""
导入工作流到服务器
Args:
name: 工作流名称(顶层)
workflow_uuid: 工作流UUID
workflow_name: 工作流名称data内部
nodes: 工作流节点列表
edges: 工作流边列表
tags: 工作流标签列表,默认为空列表
published: 是否发布工作流默认为False
Returns:
Dict: API响应数据包含 code 和 data (uuid, name)
"""
# target_lab_uuid 暂时使用默认值,后续由后端根据 ak/sk 获取
payload = {
"target_lab_uuid": "28c38bb0-63f6-4352-b0d8-b5b8eb1766d5",
"name": name,
"data": {
"workflow_uuid": workflow_uuid,
"workflow_name": workflow_name,
"nodes": nodes,
"edges": edges,
"tags": tags if tags is not None else [],
"published": published,
},
}
# 保存请求到文件
with open(os.path.join(BasicConfig.working_dir, "req_workflow_upload.json"), "w", encoding="utf-8") as f:
f.write(json.dumps(payload, indent=4, ensure_ascii=False))
response = requests.post(
f"{self.remote_addr}/lab/workflow/owner/import",
json=payload,
headers={"Authorization": f"Lab {self.auth}"},
timeout=60,
)
# 保存响应到文件
with open(os.path.join(BasicConfig.working_dir, "res_workflow_upload.json"), "w", encoding="utf-8") as f:
f.write(f"{response.status_code}" + "\n" + response.text)
if response.status_code == 200:
res = response.json()
if "code" in res and res["code"] != 0:
logger.error(f"导入工作流失败: {response.text}")
return res
else:
logger.error(f"导入工作流失败: {response.status_code}, {response.text}")
return {"code": response.status_code, "message": response.text}
# 创建默认客户端实例
http_client = HTTPClient()

View File

@@ -1,45 +0,0 @@
import json
import traceback
import uuid
from unilabos.app.model import JobAddReq, JobData
from unilabos.ros.nodes.presets.host_node import HostNode
from unilabos.utils.type_check import serialize_result_info
def get_resources() -> tuple:
if HostNode.get_instance() is None:
return False, "Host node not initialized"
return True, HostNode.get_instance().resources_config
def devices() -> tuple:
if HostNode.get_instance() is None:
return False, "Host node not initialized"
return True, HostNode.get_instance().devices_config
def job_info(id: str):
get_goal_status = HostNode.get_instance().get_goal_status(id)
return JobData(jobId=id, status=get_goal_status)
def job_add(req: JobAddReq) -> JobData:
if req.job_id is None:
req.job_id = str(uuid.uuid4())
action_name = req.data["action"]
action_type = req.data.get("action_type", "LocalUnknown")
action_args = req.data.get("action_kwargs", None) # 兼容老版本,后续删除
if action_args is None:
action_args = req.data.get("action_args")
else:
if "command" in action_args:
action_args = action_args["command"]
# print(f"job_add:{req.device_id} {action_name} {action_kwargs}")
try:
HostNode.get_instance().send_goal(req.device_id, action_type=action_type, action_name=action_name, action_kwargs=action_args, goal_uuid=req.job_id, server_info=req.server_info)
except Exception as e:
for bridge in HostNode.get_instance().bridges:
traceback.print_exc()
if hasattr(bridge, "publish_job_status"):
bridge.publish_job_status({}, req.job_id, "failed", serialize_result_info(traceback.format_exc(), False, {}))
return JobData(jobId=req.job_id)

View File

@@ -0,0 +1,587 @@
"""
Web API Controller
提供Web API的控制器函数处理设备、任务和动作相关的业务逻辑
"""
import threading
import time
import traceback
import uuid
from dataclasses import dataclass, field
from typing import Optional, Dict, Any, Tuple
from unilabos.app.model import JobAddReq, JobData
from unilabos.ros.nodes.presets.host_node import HostNode
from unilabos.utils import logger
@dataclass
class JobResult:
"""任务结果数据"""
job_id: str
status: int # 4:SUCCEEDED, 5:CANCELED, 6:ABORTED
result: Dict[str, Any] = field(default_factory=dict)
feedback: Dict[str, Any] = field(default_factory=dict)
timestamp: float = field(default_factory=time.time)
class JobResultStore:
"""任务结果存储(单例)"""
_instance: Optional["JobResultStore"] = None
_lock = threading.Lock()
def __init__(self):
if not hasattr(self, "_initialized"):
self._results: Dict[str, JobResult] = {}
self._results_lock = threading.RLock()
self._initialized = True
def __new__(cls):
if cls._instance is None:
with cls._lock:
if cls._instance is None:
cls._instance = super().__new__(cls)
return cls._instance
def store_result(
self, job_id: str, status: int, result: Optional[Dict[str, Any]], feedback: Optional[Dict[str, Any]] = None
):
"""存储任务结果"""
with self._results_lock:
self._results[job_id] = JobResult(
job_id=job_id,
status=status,
result=result or {},
feedback=feedback or {},
timestamp=time.time(),
)
logger.debug(f"[JobResultStore] Stored result for job {job_id[:8]}, status={status}")
def get_and_remove(self, job_id: str) -> Optional[JobResult]:
"""获取并删除任务结果"""
with self._results_lock:
result = self._results.pop(job_id, None)
if result:
logger.debug(f"[JobResultStore] Retrieved and removed result for job {job_id[:8]}")
return result
def get_result(self, job_id: str) -> Optional[JobResult]:
"""仅获取任务结果(不删除)"""
with self._results_lock:
return self._results.get(job_id)
def cleanup_old_results(self, max_age_seconds: float = 3600):
"""清理过期的结果"""
current_time = time.time()
with self._results_lock:
expired_jobs = [
job_id for job_id, result in self._results.items() if current_time - result.timestamp > max_age_seconds
]
for job_id in expired_jobs:
del self._results[job_id]
logger.debug(f"[JobResultStore] Cleaned up expired result for job {job_id[:8]}")
# 全局结果存储实例
job_result_store = JobResultStore()
def store_job_result(
job_id: str, status: str, result: Optional[Dict[str, Any]], feedback: Optional[Dict[str, Any]] = None
):
"""存储任务结果(供外部调用)
Args:
job_id: 任务ID
status: 状态字符串 ("success", "failed", "cancelled")
result: 结果数据
feedback: 反馈数据
"""
# 转换状态字符串为整数
status_map = {
"success": 4, # SUCCEEDED
"failed": 6, # ABORTED
"cancelled": 5, # CANCELED
"running": 2, # EXECUTING
}
status_int = status_map.get(status, 0)
# 只存储最终状态
if status_int in (4, 5, 6):
job_result_store.store_result(job_id, status_int, result, feedback)
def get_resources() -> Tuple[bool, Any]:
"""获取资源配置
Returns:
Tuple[bool, Any]: (是否成功, 资源配置或错误信息)
"""
host_node = HostNode.get_instance(0)
if host_node is None:
return False, "Host node not initialized"
return True, host_node.resources_config
def devices() -> Tuple[bool, Any]:
"""获取设备配置
Returns:
Tuple[bool, Any]: (是否成功, 设备配置或错误信息)
"""
host_node = HostNode.get_instance(0)
if host_node is None:
return False, "Host node not initialized"
return True, host_node.devices_config
def job_info(job_id: str, remove_after_read: bool = True) -> JobData:
"""获取任务信息
Args:
job_id: 任务ID
remove_after_read: 是否在读取后删除结果默认True
Returns:
JobData: 任务数据
"""
# 首先检查结果存储中是否有已完成的结果
if remove_after_read:
stored_result = job_result_store.get_and_remove(job_id)
else:
stored_result = job_result_store.get_result(job_id)
if stored_result:
# 有存储的结果,直接返回
return JobData(
jobId=job_id,
status=stored_result.status,
result=stored_result.result,
)
# 没有存储的结果,从 HostNode 获取当前状态
host_node = HostNode.get_instance(0)
if host_node is None:
return JobData(jobId=job_id, status=0)
get_goal_status = host_node.get_goal_status(job_id)
return JobData(jobId=job_id, status=get_goal_status)
def check_device_action_busy(device_id: str, action_name: str) -> Tuple[bool, Optional[str]]:
"""检查设备动作是否正在执行(被占用)
Args:
device_id: 设备ID
action_name: 动作名称
Returns:
Tuple[bool, Optional[str]]: (是否繁忙, 当前执行的job_id或None)
"""
host_node = HostNode.get_instance(0)
if host_node is None:
return False, None
device_action_key = f"/devices/{device_id}/{action_name}"
# 检查 _device_action_status 中是否有正在执行的任务
if device_action_key in host_node._device_action_status:
status = host_node._device_action_status[device_action_key]
if status.job_ids:
# 返回第一个正在执行的job_id
current_job_id = next(iter(status.job_ids.keys()), None)
return True, current_job_id
return False, None
def _get_action_type(device_id: str, action_name: str) -> Optional[str]:
"""从注册表自动获取动作类型
Args:
device_id: 设备ID
action_name: 动作名称
Returns:
动作类型字符串未找到返回None
"""
try:
from unilabos.ros.nodes.base_device_node import registered_devices
# 方法1: 从运行时注册设备获取
if device_id in registered_devices:
device_info = registered_devices[device_id]
base_node = device_info.get("base_node_instance")
if base_node and hasattr(base_node, "_action_value_mappings"):
action_mappings = base_node._action_value_mappings
# 尝试直接匹配或 auto- 前缀匹配
for key in [action_name, f"auto-{action_name}"]:
if key in action_mappings:
action_type = action_mappings[key].get("type")
if action_type:
# 转换为字符串格式
if hasattr(action_type, "__module__") and hasattr(action_type, "__name__"):
return f"{action_type.__module__}.{action_type.__name__}"
return str(action_type)
# 方法2: 从lab_registry获取
from unilabos.registry.registry import lab_registry
host_node = HostNode.get_instance(0)
if host_node and lab_registry:
devices_config = host_node.devices_config
device_class = None
for tree in devices_config.trees:
node = tree.root_node
if node.res_content.id == device_id:
device_class = node.res_content.klass
break
if device_class and device_class in lab_registry.device_type_registry:
device_type_info = lab_registry.device_type_registry[device_class]
class_info = device_type_info.get("class", {})
action_mappings = class_info.get("action_value_mappings", {})
for key in [action_name, f"auto-{action_name}"]:
if key in action_mappings:
action_type = action_mappings[key].get("type")
if action_type:
if hasattr(action_type, "__module__") and hasattr(action_type, "__name__"):
return f"{action_type.__module__}.{action_type.__name__}"
return str(action_type)
except Exception as e:
logger.warning(f"[Controller] Failed to get action type for {device_id}/{action_name}: {str(e)}")
return None
def job_add(req: JobAddReq) -> JobData:
"""添加任务(检查设备是否繁忙,繁忙则返回失败)
Args:
req: 任务添加请求
Returns:
JobData: 任务数据(包含状态)
"""
# 服务端自动生成 job_id 和 task_id
job_id = str(uuid.uuid4())
task_id = str(uuid.uuid4())
# 服务端自动生成 server_info
server_info = {"send_timestamp": time.time()}
host_node = HostNode.get_instance(0)
if host_node is None:
logger.error(f"[Controller] Host node not initialized for job: {job_id[:8]}")
return JobData(jobId=job_id, status=6) # 6 = ABORTED
# 解析动作信息
action_name = req.data.get("action", req.action) if req.data else req.action
action_args = req.data.get("action_kwargs") or req.data.get("action_args") if req.data else req.action_args
if action_args is None:
action_args = req.action_args or {}
elif isinstance(action_args, dict) and "command" in action_args:
action_args = action_args["command"]
# 自动获取 action_type
action_type = _get_action_type(req.device_id, action_name)
if action_type is None:
logger.error(f"[Controller] Action type not found for {req.device_id}/{action_name}")
return JobData(jobId=job_id, status=6) # ABORTED
# 检查设备动作是否繁忙
is_busy, current_job_id = check_device_action_busy(req.device_id, action_name)
if is_busy:
logger.warning(
f"[Controller] Device action busy: {req.device_id}/{action_name}, "
f"current job: {current_job_id[:8] if current_job_id else 'unknown'}"
)
# 返回失败状态,status=6 表示 ABORTED
return JobData(jobId=job_id, status=6)
# 设备空闲,提交任务执行
try:
from unilabos.app.ws_client import QueueItem
device_action_key = f"/devices/{req.device_id}/{action_name}"
queue_item = QueueItem(
task_type="job_call_back_status",
device_id=req.device_id,
action_name=action_name,
task_id=task_id,
job_id=job_id,
device_action_key=device_action_key,
)
host_node.send_goal(
queue_item,
action_type=action_type,
action_kwargs=action_args,
server_info=server_info,
)
logger.info(f"[Controller] Job submitted: {job_id[:8]} -> {req.device_id}/{action_name}")
# 返回已接受状态,status=1 表示 ACCEPTED
return JobData(jobId=job_id, status=1)
except ValueError as e:
# ActionClient not found 等错误
logger.error(f"[Controller] Action not available: {str(e)}")
return JobData(jobId=job_id, status=6) # ABORTED
except Exception as e:
logger.error(f"[Controller] Error submitting job: {str(e)}")
traceback.print_exc()
return JobData(jobId=job_id, status=6) # ABORTED
def get_online_devices() -> Tuple[bool, Dict[str, Any]]:
"""获取在线设备列表
Returns:
Tuple[bool, Dict]: (是否成功, 在线设备信息)
"""
host_node = HostNode.get_instance(0)
if host_node is None:
return False, {"error": "Host node not initialized"}
try:
from unilabos.ros.nodes.base_device_node import registered_devices
online_devices = {}
for device_key in host_node._online_devices:
# device_key 格式: "namespace/device_id"
parts = device_key.split("/")
if len(parts) >= 2:
device_id = parts[-1]
else:
device_id = device_key
# 获取设备详细信息
device_info = registered_devices.get(device_id, {})
machine_name = host_node.device_machine_names.get(device_id, "未知")
online_devices[device_id] = {
"device_key": device_key,
"namespace": host_node.devices_names.get(device_id, ""),
"machine_name": machine_name,
"uuid": device_info.get("uuid", "") if device_info else "",
"node_name": device_info.get("node_name", "") if device_info else "",
}
return True, {
"online_devices": online_devices,
"total_count": len(online_devices),
"timestamp": time.time(),
}
except Exception as e:
logger.error(f"[Controller] Error getting online devices: {str(e)}")
traceback.print_exc()
return False, {"error": str(e)}
def get_device_actions(device_id: str) -> Tuple[bool, Dict[str, Any]]:
"""获取设备可用的动作列表
Args:
device_id: 设备ID
Returns:
Tuple[bool, Dict]: (是否成功, 动作列表信息)
"""
host_node = HostNode.get_instance(0)
if host_node is None:
return False, {"error": "Host node not initialized"}
try:
from unilabos.ros.nodes.base_device_node import registered_devices
from unilabos.app.web.utils.action_utils import get_action_info
# 检查设备是否已注册
if device_id not in registered_devices:
return False, {"error": f"Device not found: {device_id}"}
device_info = registered_devices[device_id]
actions = device_info.get("actions", {})
actions_list = {}
for action_name, action_server in actions.items():
try:
action_info = get_action_info(action_server, action_name)
# 检查动作是否繁忙
is_busy, current_job = check_device_action_busy(device_id, action_name)
actions_list[action_name] = {
**action_info,
"is_busy": is_busy,
"current_job_id": current_job[:8] if current_job else None,
}
except Exception as e:
logger.warning(f"[Controller] Error getting action info for {action_name}: {str(e)}")
actions_list[action_name] = {
"type_name": "unknown",
"action_path": f"/devices/{device_id}/{action_name}",
"is_busy": False,
"error": str(e),
}
return True, {
"device_id": device_id,
"actions": actions_list,
"action_count": len(actions_list),
}
except Exception as e:
logger.error(f"[Controller] Error getting device actions: {str(e)}")
traceback.print_exc()
return False, {"error": str(e)}
def get_action_schema(device_id: str, action_name: str) -> Tuple[bool, Dict[str, Any]]:
"""获取动作的Schema详情
Args:
device_id: 设备ID
action_name: 动作名称
Returns:
Tuple[bool, Dict]: (是否成功, Schema信息)
"""
host_node = HostNode.get_instance(0)
if host_node is None:
return False, {"error": "Host node not initialized"}
try:
from unilabos.registry.registry import lab_registry
from unilabos.ros.nodes.base_device_node import registered_devices
result = {
"device_id": device_id,
"action_name": action_name,
"schema": None,
"goal_default": None,
"action_type": None,
"is_busy": False,
}
# 检查动作是否繁忙
is_busy, current_job = check_device_action_busy(device_id, action_name)
result["is_busy"] = is_busy
result["current_job_id"] = current_job[:8] if current_job else None
# 方法1: 从 registered_devices 获取运行时信息
if device_id in registered_devices:
device_info = registered_devices[device_id]
base_node = device_info.get("base_node_instance")
if base_node and hasattr(base_node, "_action_value_mappings"):
action_mappings = base_node._action_value_mappings
if action_name in action_mappings:
mapping = action_mappings[action_name]
result["schema"] = mapping.get("schema")
result["goal_default"] = mapping.get("goal_default")
result["action_type"] = str(mapping.get("type", ""))
# Method 2: fall back to registry info from lab_registry (if no runtime info is available)
if result["schema"] is None and lab_registry:
# Try to resolve the device class
devices_config = host_node.devices_config
device_class = None
# Get the device class from the configuration
for tree in devices_config.trees:
node = tree.root_node
if node.res_content.id == device_id:
device_class = node.res_content.klass
break
if device_class and device_class in lab_registry.device_type_registry:
device_type_info = lab_registry.device_type_registry[device_class]
class_info = device_type_info.get("class", {})
action_mappings = class_info.get("action_value_mappings", {})
# Try a direct match or an "auto-" prefixed match
for key in [action_name, f"auto-{action_name}"]:
if key in action_mappings:
mapping = action_mappings[key]
result["schema"] = mapping.get("schema")
result["goal_default"] = mapping.get("goal_default")
result["action_type"] = str(mapping.get("type", ""))
result["handles"] = mapping.get("handles", {})
result["placeholder_keys"] = mapping.get("placeholder_keys", {})
break
if result["schema"] is None:
return False, {"error": f"Action schema not found: {device_id}/{action_name}"}
return True, result
except Exception as e:
logger.error(f"[Controller] Error getting action schema: {str(e)}")
traceback.print_exc()
return False, {"error": str(e)}
def get_all_available_actions() -> Tuple[bool, Dict[str, Any]]:
"""获取所有设备的可用动作
Returns:
Tuple[bool, Dict]: (是否成功, 所有设备的动作信息)
"""
host_node = HostNode.get_instance(0)
if host_node is None:
return False, {"error": "Host node not initialized"}
try:
from unilabos.ros.nodes.base_device_node import registered_devices
from unilabos.app.web.utils.action_utils import get_action_info
all_actions = {}
total_action_count = 0
for device_id, device_info in registered_devices.items():
actions = device_info.get("actions", {})
device_actions = {}
for action_name, action_server in actions.items():
try:
action_info = get_action_info(action_server, action_name)
is_busy, current_job = check_device_action_busy(device_id, action_name)
device_actions[action_name] = {
"type_name": action_info.get("type_name", ""),
"action_path": action_info.get("action_path", ""),
"is_busy": is_busy,
"current_job_id": current_job[:8] if current_job else None,
}
total_action_count += 1
except Exception as e:
logger.warning(f"[Controller] Error processing action {device_id}/{action_name}: {str(e)}")
if device_actions:
all_actions[device_id] = {
"actions": device_actions,
"action_count": len(device_actions),
"machine_name": host_node.device_machine_names.get(device_id, "未知"),
}
return True, {
"devices": all_actions,
"device_count": len(all_actions),
"total_action_count": total_action_count,
"timestamp": time.time(),
}
except Exception as e:
logger.error(f"[Controller] Error getting all available actions: {str(e)}")
traceback.print_exc()
return False, {"error": str(e)}

View File

@@ -389,7 +389,7 @@ class MessageProcessor:
self.is_running = True
self.thread = threading.Thread(target=self._run, daemon=True, name="MessageProcessor")
self.thread.start()
logger.info("[MessageProcessor] Started")
logger.trace("[MessageProcessor] Started")
def stop(self) -> None:
"""停止消息处理线程"""
@@ -438,7 +438,7 @@ class MessageProcessor:
self.connected = True
self.reconnect_count = 0
logger.info(f"[MessageProcessor] Connected to {self.websocket_url}")
logger.trace(f"[MessageProcessor] Connected to {self.websocket_url}")
# Start the send coroutine
send_task = asyncio.create_task(self._send_handler())
@@ -503,7 +503,7 @@ class MessageProcessor:
async def _send_handler(self):
"""处理发送队列中的消息"""
logger.debug("[MessageProcessor] Send handler started")
logger.trace("[MessageProcessor] Send handler started")
try:
while self.connected and self.websocket:
@@ -939,7 +939,7 @@ class QueueProcessor:
# Event notification mechanism
self.queue_update_event = threading.Event()
logger.info("[QueueProcessor] Initialized")
logger.trace("[QueueProcessor] Initialized")
def set_websocket_client(self, websocket_client: "WebSocketClient"):
"""设置WebSocket客户端引用"""
@@ -954,7 +954,7 @@ class QueueProcessor:
self.is_running = True
self.thread = threading.Thread(target=self._run, daemon=True, name="QueueProcessor")
self.thread.start()
logger.info("[QueueProcessor] Started")
logger.trace("[QueueProcessor] Started")
def stop(self) -> None:
"""停止队列处理线程"""
@@ -965,7 +965,7 @@ class QueueProcessor:
def _run(self):
"""运行队列处理主循环"""
logger.debug("[QueueProcessor] Queue processor started")
logger.trace("[QueueProcessor] Queue processor started")
while self.is_running:
try:
@@ -1175,7 +1175,6 @@ class WebSocketClient(BaseCommunicationClient):
else:
url = f"{scheme}://{parsed.netloc}/api/v1/ws/schedule"
logger.debug(f"[WebSocketClient] URL: {url}")
return url
def start(self) -> None:
@@ -1188,13 +1187,11 @@ class WebSocketClient(BaseCommunicationClient):
logger.error("[WebSocketClient] WebSocket URL not configured")
return
logger.info(f"[WebSocketClient] Starting connection to {self.websocket_url}")
# Start the two core threads
self.message_processor.start()
self.queue_processor.start()
logger.info("[WebSocketClient] All threads started")
logger.trace("[WebSocketClient] All threads started")
def stop(self) -> None:
"""停止WebSocket客户端"""
@@ -1314,3 +1311,19 @@ class WebSocketClient(BaseCommunicationClient):
logger.info(f"[WebSocketClient] Job {job_log} cancelled successfully")
else:
logger.warning(f"[WebSocketClient] Failed to cancel job {job_log}")
def publish_host_ready(self) -> None:
"""发布host_node ready信号"""
if self.is_disabled or not self.is_connected():
logger.debug("[WebSocketClient] Not connected, cannot publish host ready signal")
return
message = {
"action": "host_node_ready",
"data": {
"status": "ready",
"timestamp": time.time(),
},
}
self.message_processor.send_message(message)
logger.info("[WebSocketClient] Host node ready signal published")

View File

@@ -21,7 +21,8 @@ class BasicConfig:
startup_json_path = None # 填写绝对路径
disable_browser = False # 禁止浏览器自动打开
port = 8002 # 本地HTTP服务
log_level: Literal['TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'] = "DEBUG" # 'TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'
# 'TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'
log_level: Literal["TRACE", "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"] = "DEBUG"
@classmethod
def auth_secret(cls):
@@ -41,7 +42,7 @@ class WSConfig:
# HTTP configuration
class HTTPConfig:
remote_addr = "http://127.0.0.1:48197/api/v1"
remote_addr = "https://uni-lab.bohrium.com/api/v1"
# ROS configuration
@@ -65,13 +66,14 @@ def _update_config_from_module(module):
if not attr.startswith("_"):
setattr(obj, attr, getattr(getattr(module, name), attr))
def _update_config_from_env():
prefix = "UNILABOS_"
for env_key, env_value in os.environ.items():
if not env_key.startswith(prefix):
continue
try:
key_path = env_key[len(prefix):] # Remove UNILAB_ prefix
key_path = env_key[len(prefix) :] # Remove UNILAB_ prefix
class_field = key_path.upper().split("_", 1)
if len(class_field) != 2:
logger.warning(f"[ENV] 环境变量格式不正确:{env_key}")

View File

View File

@@ -0,0 +1,712 @@
#!/usr/bin/env python3
import asyncio
import json
import subprocess
import sys
import threading
from typing import Optional, Dict, Any
import logging
import requests
import websockets
logging.getLogger("zeep").setLevel(logging.WARNING)
logging.getLogger("zeep.xsd.schema").setLevel(logging.WARNING)
logging.getLogger("zeep.xsd.schema.schema").setLevel(logging.WARNING)
from onvif import ONVIFCamera  # added: ONVIF PTZ control
# ======================= Standalone PTZController =======================
class PTZController:
def __init__(self, host: str, port: int, user: str, password: str):
"""
:param host: camera IP or hostname (same as the one used for RTSP)
:param port: ONVIF port (often 80, check your device)
:param user: camera username
:param password: camera password
"""
self.host = host
self.port = port
self.user = user
self.password = password
self.cam: Optional[ONVIFCamera] = None
self.media_service = None
self.ptz_service = None
self.profile = None
def connect(self) -> bool:
"""
Establish the ONVIF connection and initialize PTZ capabilities; returns False on failure instead of raising.
Note: requires `pip install onvif-zeep` first.
"""
try:
self.cam = ONVIFCamera(self.host, self.port, self.user, self.password)
self.media_service = self.cam.create_media_service()
self.ptz_service = self.cam.create_ptz_service()
profiles = self.media_service.GetProfiles()
if not profiles:
print("[PTZ] No media profiles found on camera.", file=sys.stderr)
return False
self.profile = profiles[0]
return True
except Exception as e:
print(f"[PTZ] Failed to init ONVIF PTZ: {e}", file=sys.stderr)
return False
def _continuous_move(self, pan: float, tilt: float, zoom: float, duration: float) -> bool:
"""
Continuously move for a given duration (seconds), then stop automatically.
This call is blocking: it returns True/False only after the Stop call has completed.
"""
if not self.ptz_service or not self.profile:
print("[PTZ] _continuous_move: ptz_service or profile not ready", file=sys.stderr)
return False
# Force a stop first to clear any leftover motion from a previous command
self._force_stop()
req = self.ptz_service.create_type("ContinuousMove")
req.ProfileToken = self.profile.token
req.Velocity = {
"PanTilt": {"x": pan, "y": tilt},
"Zoom": {"x": zoom},
}
try:
print(f"[PTZ] ContinuousMove start: pan={pan}, tilt={tilt}, zoom={zoom}, duration={duration}", file=sys.stderr)
self.ptz_service.ContinuousMove(req)
except Exception as e:
print(f"[PTZ] ContinuousMove failed: {e}", file=sys.stderr)
return False
# Blocking wait: this determines the actual motion time
import time
wait_seconds = max(2 * duration, 0.0)
time.sleep(wait_seconds)
# Force a stop once the motion is done
return self._force_stop()
def stop(self) -> bool:
"""
Blocking Stop call (with retries): True on success, False on failure.
"""
return self._force_stop()
# ------- Public motion interface (called by CameraController) -------
# All of these are blocking: they return True/False only after the motion plus Stop have completed
def move_up(self, speed: float = 0.5, duration: float = 1.0) -> bool:
print(f"[PTZ] move_up called, speed={speed}, duration={duration}", file=sys.stderr)
return self._continuous_move(pan=0.0, tilt=+speed, zoom=0.0, duration=duration)
def move_down(self, speed: float = 0.5, duration: float = 1.0) -> bool:
print(f"[PTZ] move_down called, speed={speed}, duration={duration}", file=sys.stderr)
return self._continuous_move(pan=0.0, tilt=-speed, zoom=0.0, duration=duration)
def move_left(self, speed: float = 0.2, duration: float = 1.0) -> bool:
print(f"[PTZ] move_left called, speed={speed}, duration={duration}", file=sys.stderr)
return self._continuous_move(pan=-speed, tilt=0.0, zoom=0.0, duration=duration)
def move_right(self, speed: float = 0.2, duration: float = 1.0) -> bool:
print(f"[PTZ] move_right called, speed={speed}, duration={duration}", file=sys.stderr)
return self._continuous_move(pan=+speed, tilt=0.0, zoom=0.0, duration=duration)
# ------- Placeholder zoom interface (not supported by the current device) -------
def zoom_in(self, speed: float = 0.2, duration: float = 1.0) -> bool:
"""
Zoom is not supported on this device; the method is kept only so upper-layer calls do not error out.
"""
print("[PTZ] zoom_in is disabled for this device.", file=sys.stderr)
return False
def zoom_out(self, speed: float = 0.2, duration: float = 1.0) -> bool:
"""
Zoom is not supported on this device; the method is kept only so upper-layer calls do not error out.
"""
print("[PTZ] zoom_out is disabled for this device.", file=sys.stderr)
return False
def _force_stop(self, retries: int = 3, delay: float = 0.1) -> bool:
"""
Call Stop multiple times as a "force stop" measure.
:param retries: number of retries
:param delay: interval between retries (seconds)
"""
if not self.ptz_service or not self.profile:
print("[PTZ] _force_stop: ptz_service or profile not ready", file=sys.stderr)
return False
import time
last_error = None
for i in range(retries):
try:
print(f"[PTZ] _force_stop: calling Stop(), attempt={i+1}", file=sys.stderr)
self.ptz_service.Stop({"ProfileToken": self.profile.token})
print("[PTZ] _force_stop: Stop() returned OK", file=sys.stderr)
return True
except Exception as e:
last_error = e
print(f"[PTZ] _force_stop: Stop() failed at attempt {i+1}: {e}", file=sys.stderr)
time.sleep(delay)
print(f"[PTZ] _force_stop: all {retries} attempts failed, last error: {last_error}", file=sys.stderr)
return False
# ======================= CameraController (with PTZ) =======================
class CameraController:
"""
Uni-Lab-OS camera driver (as a driver class).
Starts pushing the stream as soon as Uni-Lab-OS starts.
- WebSocket signaling: connects to the backend via signal_backend_url,
e.g. wss://sciol.ac.cn/api/realtime/signal/host/<host_id>
- Media server: configured via rtmp_url / webrtc_api / webrtc_stream_url
(currently SRS, consistent with the standalone HostSimulator script).
"""
def __init__(
self,
host_id: str = "demo-host",
# (1) Signaling backend (WebSocket)
signal_backend_url: str = "wss://sciol.ac.cn/api/realtime/signal/host",
# (2) Media backend (RTMP + WebRTC API)
rtmp_url: str = "rtmp://srs.sciol.ac.cn:4499/live/camera-01",
webrtc_api: str = "https://srs.sciol.ac.cn/rtc/v1/play/",
webrtc_stream_url: str = "webrtc://srs.sciol.ac.cn:4500/live/camera-01",
camera_rtsp_url: str = "",
# (3) PTZ control (ONVIF)
ptz_host: str = "",  # usually the camera IP, e.g. "192.168.31.164"
ptz_port: int = 80,  # ONVIF port; not necessarily 80, adjust to your device
ptz_user: str = "",  # admin
ptz_password: str = "",  # admin123
):
self.host_id = host_id
self.camera_rtsp_url = camera_rtsp_url
# Build the final WebSocket URL: .../host/<host_id>
signal_backend_url = signal_backend_url.rstrip("/")
if not signal_backend_url.endswith("/host"):
signal_backend_url = signal_backend_url + "/host"
self.signal_backend_url = f"{signal_backend_url}/{host_id}"
# Media server configuration
self.rtmp_url = rtmp_url
self.webrtc_api = webrtc_api
self.webrtc_stream_url = webrtc_stream_url
# PTZ control
self.ptz_host = ptz_host
self.ptz_port = ptz_port
self.ptz_user = ptz_user
self.ptz_password = ptz_password
self._ptz: Optional[PTZController] = None
self._init_ptz_if_possible()
# Runtime state
self._ws: Optional[object] = None
self._ffmpeg_process: Optional[subprocess.Popen] = None
self._running = False
self._loop_task: Optional[asyncio.Future] = None
# 事件循环 & 线程
self._loop: Optional[asyncio.AbstractEventLoop] = None
self._loop_thread: Optional[threading.Thread] = None
try:
self.start()
except Exception as e:
print(f"[CameraController] __init__ auto start failed: {e}", file=sys.stderr)
# ------------------------ PTZ initialization ------------------------
# ------------------------ Public PTZ motion methods (one function per action) ------------------------
def ptz_move_up(self, speed: float = 0.5, duration: float = 1.0) -> bool:
print(f"[CameraController] ptz_move_up called, speed={speed}, duration={duration}")
if self._ptz is None:
print("[CameraController] PTZ not initialized.", file=sys.stderr)
return False
return self._ptz.move_up(speed=speed, duration=duration)
def ptz_move_down(self, speed: float = 0.5, duration: float = 1.0) -> bool:
print(f"[CameraController] ptz_move_down called, speed={speed}, duration={duration}")
if self._ptz is None:
print("[CameraController] PTZ not initialized.", file=sys.stderr)
return False
return self._ptz.move_down(speed=speed, duration=duration)
def ptz_move_left(self, speed: float = 0.2, duration: float = 1.0) -> bool:
print(f"[CameraController] ptz_move_left called, speed={speed}, duration={duration}")
if self._ptz is None:
print("[CameraController] PTZ not initialized.", file=sys.stderr)
return False
return self._ptz.move_left(speed=speed, duration=duration)
def ptz_move_right(self, speed: float = 0.2, duration: float = 1.0) -> bool:
print(f"[CameraController] ptz_move_right called, speed={speed}, duration={duration}")
if self._ptz is None:
print("[CameraController] PTZ not initialized.", file=sys.stderr)
return False
return self._ptz.move_right(speed=speed, duration=duration)
def zoom_in(self, speed: float = 0.2, duration: float = 1.0) -> bool:
"""
当前设备不支持变倍;保留方法只是避免上层调用时报错。
"""
print("[PTZ] zoom_in is disabled for this device.", file=sys.stderr)
return False
def zoom_out(self, speed: float = 0.2, duration: float = 1.0) -> bool:
"""
当前设备不支持变倍;保留方法只是避免上层调用时报错。
"""
print("[PTZ] zoom_out is disabled for this device.", file=sys.stderr)
return False
def ptz_stop(self):
if self._ptz is None:
print("[CameraController] PTZ not initialized.", file=sys.stderr)
return
self._ptz.stop()
def _init_ptz_if_possible(self):
"""
Initialize PTZ from ptz_host / user / password.
If the configuration is incomplete, PTZ stays disabled (silently).
"""
if not (self.ptz_host and self.ptz_user and self.ptz_password):
return
ctrl = PTZController(
host=self.ptz_host,
port=self.ptz_port,
user=self.ptz_user,
password=self.ptz_password,
)
if ctrl.connect():
self._ptz = ctrl
else:
self._ptz = None
# ---------------------------------------------------------------------
# Public methods: called by Uni-Lab-OS
# ---------------------------------------------------------------------
def start(self, config: Optional[Dict[str, Any]] = None):
"""
Start the camera connection & message loop, and start the FFmpeg push stream right away.
"""
if self._running:
return {"status": "already_running", "host_id": self.host_id}
# Apply config overrides (if any)
if config:
self.camera_rtsp_url = config.get("camera_rtsp_url", self.camera_rtsp_url)
cfg_host_id = config.get("host_id")
if cfg_host_id:
self.host_id = cfg_host_id
signal_backend_url = config.get("signal_backend_url")
if signal_backend_url:
signal_backend_url = signal_backend_url.rstrip("/")
if not signal_backend_url.endswith("/host"):
signal_backend_url = signal_backend_url + "/host"
self.signal_backend_url = f"{signal_backend_url}/{self.host_id}"
self.rtmp_url = config.get("rtmp_url", self.rtmp_url)
self.webrtc_api = config.get("webrtc_api", self.webrtc_api)
self.webrtc_stream_url = config.get(
"webrtc_stream_url", self.webrtc_stream_url
)
# PTZ settings may also be injected via config
self.ptz_host = config.get("ptz_host", self.ptz_host)
self.ptz_port = int(config.get("ptz_port", self.ptz_port))
self.ptz_user = config.get("ptz_user", self.ptz_user)
self.ptz_password = config.get("ptz_password", self.ptz_password)
self._init_ptz_if_possible()
self._running = True
# === Start the FFmpeg push stream at start() time ===
self._start_ffmpeg()
# Create a new event loop and thread (for WebSocket signaling)
self._loop = asyncio.new_event_loop()
def loop_runner(loop: asyncio.AbstractEventLoop):
asyncio.set_event_loop(loop)
try:
loop.run_forever()
except Exception as e:
print(f"[CameraController] event loop error: {e}", file=sys.stderr)
self._loop_thread = threading.Thread(
target=loop_runner, args=(self._loop,), daemon=True
)
self._loop_thread.start()
self._loop_task = asyncio.run_coroutine_threadsafe(
self._run_main_loop(), self._loop
)
return {
"status": "started",
"host_id": self.host_id,
"signal_backend_url": self.signal_backend_url,
"rtmp_url": self.rtmp_url,
"webrtc_api": self.webrtc_api,
"webrtc_stream_url": self.webrtc_stream_url,
}
def stop(self) -> Dict[str, Any]:
"""
停止推流 & 断开 WebSocket并关闭事件循环线程。
"""
self._running = False
self._stop_ffmpeg()
if self._ws and self._loop is not None:
async def close_ws():
try:
await self._ws.close()
except Exception as e:
print(
f"[CameraController] error when closing WebSocket: {e}",
file=sys.stderr,
)
asyncio.run_coroutine_threadsafe(close_ws(), self._loop)
if self._loop_task is not None:
if not self._loop_task.done():
self._loop_task.cancel()
try:
self._loop_task.result()
except asyncio.CancelledError:
pass
except Exception as e:
print(
f"[CameraController] main loop task error in stop(): {e}",
file=sys.stderr,
)
finally:
self._loop_task = None
if self._loop is not None:
try:
self._loop.call_soon_threadsafe(self._loop.stop)
except Exception as e:
print(
f"[CameraController] error when stopping event loop: {e}",
file=sys.stderr,
)
if self._loop_thread is not None:
try:
self._loop_thread.join(timeout=5)
except Exception as e:
print(
f"[CameraController] error when joining loop thread: {e}",
file=sys.stderr,
)
finally:
self._loop_thread = None
self._ws = None
self._loop = None
return {"status": "stopped", "host_id": self.host_id}
def get_status(self) -> Dict[str, Any]:
"""
Query the current status, for easy monitoring inside Uni-Lab-OS.
"""
ws_closed = None
if self._ws is not None:
ws_closed = getattr(self._ws, "closed", None)
if ws_closed is None:
websocket_connected = self._ws is not None
else:
websocket_connected = (self._ws is not None) and (not ws_closed)
return {
"host_id": self.host_id,
"running": self._running,
"websocket_connected": websocket_connected,
"ffmpeg_running": bool(
self._ffmpeg_process and self._ffmpeg_process.poll() is None
),
"signal_backend_url": self.signal_backend_url,
"rtmp_url": self.rtmp_url,
}
# ---------------------------------------------------------------------
# Internal implementation: WebSocket loop / FFmpeg / WebRTC offer handling
# ---------------------------------------------------------------------
async def _run_main_loop(self):
try:
while self._running:
try:
async with websockets.connect(self.signal_backend_url) as ws:
self._ws = ws
await self._recv_loop()
except asyncio.CancelledError:
raise
except Exception as e:
if self._running:
print(
f"[CameraController] WebSocket connection error: {e}",
file=sys.stderr,
)
await asyncio.sleep(3)
except asyncio.CancelledError:
pass
async def _recv_loop(self):
assert self._ws is not None
ws = self._ws
async for message in ws:
try:
data = json.loads(message)
except json.JSONDecodeError:
print(
f"[CameraController] received non-JSON message: {message}",
file=sys.stderr,
)
continue
try:
await self._handle_message(data)
except Exception as e:
print(
f"[CameraController] error while handling message {data}: {e}",
file=sys.stderr,
)
async def _handle_message(self, data: Dict[str, Any]):
"""
Handle messages from the signaling backend:
- command: start_stream / stop_stream / ptz_xxx
- type: offer (WebRTC)
"""
cmd = data.get("command")
# ---------- Streaming control ----------
if cmd == "start_stream":
try:
self._start_ffmpeg()
except Exception as e:
print(
f"[CameraController] error when starting FFmpeg on start_stream: {e}",
file=sys.stderr,
)
return
if cmd == "stop_stream":
try:
self._stop_ffmpeg()
except Exception as e:
print(
f"[CameraController] error when stopping FFmpeg on stop_stream: {e}",
file=sys.stderr,
)
return
# # ---------- PTZ 控制 ----------
# # 例如信令可以发:
# # {"command": "ptz_move", "direction": "down", "speed": 0.5, "duration": 0.5}
# if cmd == "ptz_move":
# if self._ptz is None:
# # 没有初始化 PTZ静默忽略或打印一条
# print("[CameraController] PTZ not initialized.", file=sys.stderr)
# return
# direction = data.get("direction", "")
# speed = float(data.get("speed", 0.5))
# duration = float(data.get("duration", 0.5))
# try:
# if direction == "up":
# self._ptz.move_up(speed=speed, duration=duration)
# elif direction == "down":
# self._ptz.move_down(speed=speed, duration=duration)
# elif direction == "left":
# self._ptz.move_left(speed=speed, duration=duration)
# elif direction == "right":
# self._ptz.move_right(speed=speed, duration=duration)
# elif direction == "zoom_in":
# self._ptz.zoom_in(speed=speed, duration=duration)
# elif direction == "zoom_out":
# self._ptz.zoom_out(speed=speed, duration=duration)
# elif direction == "stop":
# self._ptz.stop()
# else:
# # 未知方向,忽略
# pass
# except Exception as e:
# print(
# f"[CameraController] error when handling PTZ move: {e}",
# file=sys.stderr,
# )
# return
# ---------- WebRTC Offer ----------
if data.get("type") == "offer":
offer_sdp = data.get("sdp", "")
camera_id = data.get("cameraId", "camera-01")
try:
answer_sdp = await self._handle_webrtc_offer(offer_sdp)
except Exception as e:
print(
f"[CameraController] error when handling WebRTC offer: {e}",
file=sys.stderr,
)
return
if self._ws:
answer_payload = {
"type": "answer",
"sdp": answer_sdp,
"cameraId": camera_id,
"hostId": self.host_id,
}
try:
await self._ws.send(json.dumps(answer_payload))
except Exception as e:
print(
f"[CameraController] error when sending WebRTC answer: {e}",
file=sys.stderr,
)
# ------------------------ FFmpeg helpers ------------------------
def _start_ffmpeg(self):
if self._ffmpeg_process and self._ffmpeg_process.poll() is None:
return
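# Encoder settings below are tuned for low-latency live viewing: ultrafast preset with
# zerolatency tuning, baseline profile with B-frames disabled (bframes=0), and a short
# GOP (-g 10 / -keyint_min 10) so new viewers get a keyframe quickly; audio is re-encoded
# to mono 64 kbps AAC and the result is pushed to the RTMP URL as FLV.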
cmd = [
"ffmpeg",
"-rtsp_transport", "tcp",
"-i", self.camera_rtsp_url,
"-c:v", "libx264",
"-preset", "ultrafast",
"-tune", "zerolatency",
"-profile:v", "baseline",
"-b:v", "1M",
"-maxrate", "1M",
"-bufsize", "2M",
"-g", "10",
"-keyint_min", "10",
"-sc_threshold", "0",
"-pix_fmt", "yuv420p",
"-x264-params", "bframes=0",
"-c:a", "aac",
"-ar", "44100",
"-ac", "1",
"-b:a", "64k",
"-f", "flv",
self.rtmp_url,
]
try:
self._ffmpeg_process = subprocess.Popen(
cmd,
stdout=subprocess.DEVNULL,
stderr=subprocess.STDOUT,
shell=False,
)
except Exception as e:
print(f"[CameraController] failed to start FFmpeg: {e}", file=sys.stderr)
self._ffmpeg_process = None
raise
def _stop_ffmpeg(self):
proc = self._ffmpeg_process
if proc and proc.poll() is None:
try:
proc.terminate()
try:
proc.wait(timeout=5)
except subprocess.TimeoutExpired:
try:
proc.kill()
try:
proc.wait(timeout=2)
except subprocess.TimeoutExpired:
print(
f"[CameraController] FFmpeg process did not exit even after kill (pid={proc.pid})",
file=sys.stderr,
)
except Exception as e:
print(
f"[CameraController] failed to kill FFmpeg process: {e}",
file=sys.stderr,
)
except Exception as e:
print(
f"[CameraController] error when stopping FFmpeg: {e}",
file=sys.stderr,
)
self._ffmpeg_process = None
# ------------------------ WebRTC offer helpers ------------------------
async def _handle_webrtc_offer(self, offer_sdp: str) -> str:
payload = {
"api": self.webrtc_api,
"streamurl": self.webrtc_stream_url,
"sdp": offer_sdp,
}
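# SRS-style WebRTC play API: the viewer's offer SDP is POSTed as JSON together with the
# target stream URL, and the response is expected to carry the answer SDP under the "sdp"
# key (checked below). requests is synchronous, so the call is pushed onto a thread-pool
# executor to avoid blocking the signaling event loop.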
headers = {"Content-Type": "application/json"}
def _do_request():
return requests.post(
self.webrtc_api,
json=payload,
headers=headers,
timeout=10,
)
try:
loop = asyncio.get_running_loop()
resp = await loop.run_in_executor(None, _do_request)
except Exception as e:
print(
f"[CameraController] failed to send offer to media server: {e}",
file=sys.stderr,
)
raise
try:
resp.raise_for_status()
except Exception as e:
print(
f"[CameraController] media server HTTP error: {e}, "
f"status={resp.status_code}, body={resp.text[:200]}",
file=sys.stderr,
)
raise
try:
data = resp.json()
except Exception as e:
print(
f"[CameraController] failed to parse media server JSON: {e}, "
f"raw={resp.text[:200]}",
file=sys.stderr,
)
raise
answer_sdp = data.get("sdp", "")
if not answer_sdp:
msg = f"empty SDP from media server: {data}"
print(f"[CameraController] {msg}", file=sys.stderr)
raise RuntimeError(msg)
return answer_sdp

View File

@@ -0,0 +1,401 @@
#!/usr/bin/env python3
import asyncio
import json
import subprocess
import sys
import threading
from typing import Optional, Dict, Any
import requests
import websockets
class CameraController:
"""
Uni-Lab-OS camera driver (Linux USB camera variant, no PTZ).
- WebSocket signaling: signal_backend_url connects to the backend,
e.g. wss://sciol.ac.cn/api/realtime/signal/host/<host_id>
- Media server: RTMP push to rtmp_url; WebRTC offers are forwarded to the SRS webrtc_api
- Video source: local USB camera (V4L2), /dev/video0 by default
"""
def __init__(
self,
host_id: str = "demo-host",
signal_backend_url: str = "wss://sciol.ac.cn/api/realtime/signal/host",
rtmp_url: str = "rtmp://srs.sciol.ac.cn:4499/live/camera-01",
webrtc_api: str = "https://srs.sciol.ac.cn/rtc/v1/play/",
webrtc_stream_url: str = "webrtc://srs.sciol.ac.cn:4500/live/camera-01",
video_device: str = "/dev/video0",
width: int = 1280,
height: int = 720,
fps: int = 30,
video_bitrate: str = "1500k",
audio_device: Optional[str] = None,  # e.g. "hw:1,0"; keep None if there is no audio
audio_bitrate: str = "64k",
):
self.host_id = host_id
# Build the final WebSocket URL: .../host/<host_id>
signal_backend_url = signal_backend_url.rstrip("/")
if not signal_backend_url.endswith("/host"):
signal_backend_url = signal_backend_url + "/host"
self.signal_backend_url = f"{signal_backend_url}/{host_id}"
# Media server configuration
self.rtmp_url = rtmp_url
self.webrtc_api = webrtc_api
self.webrtc_stream_url = webrtc_stream_url
# Local capture configuration
self.video_device = video_device
self.width = int(width)
self.height = int(height)
self.fps = int(fps)
self.video_bitrate = video_bitrate
self.audio_device = audio_device
self.audio_bitrate = audio_bitrate
# Runtime state
self._ws: Optional[object] = None
self._ffmpeg_process: Optional[subprocess.Popen] = None
self._running = False
self._loop_task: Optional[asyncio.Future] = None
# 事件循环 & 线程
self._loop: Optional[asyncio.AbstractEventLoop] = None
self._loop_thread: Optional[threading.Thread] = None
try:
self.start()
except Exception as e:
print(f"[CameraController] __init__ auto start failed: {e}", file=sys.stderr)
# ---------------------------------------------------------------------
# Public methods
# ---------------------------------------------------------------------
def start(self, config: Optional[Dict[str, Any]] = None):
if self._running:
return {"status": "already_running", "host_id": self.host_id}
# Apply config overrides (if any)
if config:
cfg_host_id = config.get("host_id")
if cfg_host_id:
self.host_id = cfg_host_id
signal_backend_url = config.get("signal_backend_url")
if signal_backend_url:
signal_backend_url = signal_backend_url.rstrip("/")
if not signal_backend_url.endswith("/host"):
signal_backend_url = signal_backend_url + "/host"
self.signal_backend_url = f"{signal_backend_url}/{self.host_id}"
self.rtmp_url = config.get("rtmp_url", self.rtmp_url)
self.webrtc_api = config.get("webrtc_api", self.webrtc_api)
self.webrtc_stream_url = config.get("webrtc_stream_url", self.webrtc_stream_url)
self.video_device = config.get("video_device", self.video_device)
self.width = int(config.get("width", self.width))
self.height = int(config.get("height", self.height))
self.fps = int(config.get("fps", self.fps))
self.video_bitrate = config.get("video_bitrate", self.video_bitrate)
self.audio_device = config.get("audio_device", self.audio_device)
self.audio_bitrate = config.get("audio_bitrate", self.audio_bitrate)
self._running = True
print("[CameraController] start(): starting FFmpeg streaming...", file=sys.stderr)
self._start_ffmpeg()
self._loop = asyncio.new_event_loop()
def loop_runner(loop: asyncio.AbstractEventLoop):
asyncio.set_event_loop(loop)
try:
loop.run_forever()
except Exception as e:
print(f"[CameraController] event loop error: {e}", file=sys.stderr)
self._loop_thread = threading.Thread(target=loop_runner, args=(self._loop,), daemon=True)
self._loop_thread.start()
self._loop_task = asyncio.run_coroutine_threadsafe(self._run_main_loop(), self._loop)
return {
"status": "started",
"host_id": self.host_id,
"signal_backend_url": self.signal_backend_url,
"rtmp_url": self.rtmp_url,
"webrtc_api": self.webrtc_api,
"webrtc_stream_url": self.webrtc_stream_url,
"video_device": self.video_device,
"width": self.width,
"height": self.height,
"fps": self.fps,
"video_bitrate": self.video_bitrate,
"audio_device": self.audio_device,
}
def stop(self) -> Dict[str, Any]:
self._running = False
# Cancel the main task first (so ws connect/sleep exits quickly)
if self._loop_task is not None and not self._loop_task.done():
self._loop_task.cancel()
# Stop streaming
self._stop_ffmpeg()
# Close the WebSocket (executed inside the loop)
if self._ws and self._loop is not None:
async def close_ws():
try:
await self._ws.close()
except Exception as e:
print(f"[CameraController] error closing WebSocket: {e}", file=sys.stderr)
try:
asyncio.run_coroutine_threadsafe(close_ws(), self._loop)
except Exception:
pass
# Stop the event loop
if self._loop is not None:
try:
self._loop.call_soon_threadsafe(self._loop.stop)
except Exception as e:
print(f"[CameraController] error stopping loop: {e}", file=sys.stderr)
# Wait for the thread to exit
if self._loop_thread is not None:
try:
self._loop_thread.join(timeout=5)
except Exception as e:
print(f"[CameraController] error joining loop thread: {e}", file=sys.stderr)
self._ws = None
self._loop_task = None
self._loop = None
self._loop_thread = None
return {"status": "stopped", "host_id": self.host_id}
def get_status(self) -> Dict[str, Any]:
ws_closed = None
if self._ws is not None:
ws_closed = getattr(self._ws, "closed", None)
if ws_closed is None:
websocket_connected = self._ws is not None
else:
websocket_connected = (self._ws is not None) and (not ws_closed)
return {
"host_id": self.host_id,
"running": self._running,
"websocket_connected": websocket_connected,
"ffmpeg_running": bool(self._ffmpeg_process and self._ffmpeg_process.poll() is None),
"signal_backend_url": self.signal_backend_url,
"rtmp_url": self.rtmp_url,
"video_device": self.video_device,
"width": self.width,
"height": self.height,
"fps": self.fps,
"video_bitrate": self.video_bitrate,
}
# ---------------------------------------------------------------------
# WebSocket / signaling
# ---------------------------------------------------------------------
async def _run_main_loop(self):
print("[CameraController] main loop started", file=sys.stderr)
try:
while self._running:
try:
async with websockets.connect(self.signal_backend_url) as ws:
self._ws = ws
print(f"[CameraController] WebSocket connected: {self.signal_backend_url}", file=sys.stderr)
await self._recv_loop()
except asyncio.CancelledError:
raise
except Exception as e:
if self._running:
print(f"[CameraController] WebSocket connection error: {e}", file=sys.stderr)
await asyncio.sleep(3)
except asyncio.CancelledError:
pass
finally:
print("[CameraController] main loop exited", file=sys.stderr)
async def _recv_loop(self):
assert self._ws is not None
ws = self._ws
async for message in ws:
try:
data = json.loads(message)
except json.JSONDecodeError:
print(f"[CameraController] non-JSON message: {message}", file=sys.stderr)
continue
try:
await self._handle_message(data)
except Exception as e:
print(f"[CameraController] error handling message {data}: {e}", file=sys.stderr)
async def _handle_message(self, data: Dict[str, Any]):
cmd = data.get("command")
if cmd == "start_stream":
self._start_ffmpeg()
return
if cmd == "stop_stream":
self._stop_ffmpeg()
return
if data.get("type") == "offer":
offer_sdp = data.get("sdp", "")
camera_id = data.get("cameraId", "camera-01")
answer_sdp = await self._handle_webrtc_offer(offer_sdp)
if self._ws:
answer_payload = {
"type": "answer",
"sdp": answer_sdp,
"cameraId": camera_id,
"hostId": self.host_id,
}
await self._ws.send(json.dumps(answer_payload))
# ---------------------------------------------------------------------
# FFmpeg streaming (V4L2 USB camera)
# ---------------------------------------------------------------------
def _start_ffmpeg(self):
if self._ffmpeg_process and self._ffmpeg_process.poll() is None:
return
# Compatibility first: do not force an input pixel format; if capture fails, adjust width/height/fps externally
video_size = f"{self.width}x{self.height}"
cmd = [
"ffmpeg",
"-hide_banner",
"-loglevel",
"warning",
# video input
"-f", "v4l2",
"-framerate", str(self.fps),
"-video_size", video_size,
"-i", self.video_device,
]
# optional audio input
if self.audio_device:
cmd += [
"-f", "alsa",
"-i", self.audio_device,
"-c:a", "aac",
"-b:a", self.audio_bitrate,
"-ar", "44100",
"-ac", "1",
]
else:
cmd += ["-an"]
# video encode + rtmp out
cmd += [
"-c:v", "libx264",
"-preset", "ultrafast",
"-tune", "zerolatency",
"-profile:v", "baseline",
"-pix_fmt", "yuv420p",
"-b:v", self.video_bitrate,
"-maxrate", self.video_bitrate,
"-bufsize", "2M",
"-g", str(max(self.fps, 10)),
"-keyint_min", str(max(self.fps, 10)),
"-sc_threshold", "0",
"-x264-params", "bframes=0",
"-f", "flv",
self.rtmp_url,
]
print(f"[CameraController] starting FFmpeg: {' '.join(cmd)}", file=sys.stderr)
try:
# Keep the logs instead of discarding them, so ffmpeg errors are at least visible (crucial for debugging)
self._ffmpeg_process = subprocess.Popen(
cmd,
stdout=subprocess.DEVNULL,
stderr=sys.stderr,
shell=False,
)
except Exception as e:
self._ffmpeg_process = None
print(f"[CameraController] failed to start FFmpeg: {e}", file=sys.stderr)
def _stop_ffmpeg(self):
proc = self._ffmpeg_process
if proc and proc.poll() is None:
try:
proc.terminate()
try:
proc.wait(timeout=5)
except subprocess.TimeoutExpired:
proc.kill()
except Exception as e:
print(f"[CameraController] error stopping FFmpeg: {e}", file=sys.stderr)
self._ffmpeg_process = None
# ---------------------------------------------------------------------
# WebRTC offer -> SRS
# ---------------------------------------------------------------------
async def _handle_webrtc_offer(self, offer_sdp: str) -> str:
payload = {
"api": self.webrtc_api,
"streamurl": self.webrtc_stream_url,
"sdp": offer_sdp,
}
headers = {"Content-Type": "application/json"}
def _do_post():
return requests.post(self.webrtc_api, json=payload, headers=headers, timeout=10)
loop = asyncio.get_running_loop()
resp = await loop.run_in_executor(None, _do_post)
resp.raise_for_status()
data = resp.json()
answer_sdp = data.get("sdp", "")
if not answer_sdp:
raise RuntimeError(f"empty SDP from media server: {data}")
return answer_sdp
if __name__ == "__main__":
# Run directly for manual testing
c = CameraController(
host_id="demo-host",
video_device="/dev/video0",
width=1280,
height=720,
fps=30,
video_bitrate="1500k",
audio_device=None,
)
import time
try:
while True:
time.sleep(1)  # blocking sleep; a bare asyncio.sleep() would only create an un-awaited coroutine
except KeyboardInterrupt:
c.stop()

View File

@@ -0,0 +1,51 @@
#!/usr/bin/env python3
import time
import json
from cameraUSB import CameraController
def main():
# Adjust to your actual setup
cfg = dict(
host_id="demo-host",
signal_backend_url="wss://sciol.ac.cn/api/realtime/signal/host",
rtmp_url="rtmp://srs.sciol.ac.cn:4499/live/camera-01",
webrtc_api="https://srs.sciol.ac.cn/rtc/v1/play/",
webrtc_stream_url="webrtc://srs.sciol.ac.cn:4500/live/camera-01",
video_device="/dev/video7",
width=1280,
height=720,
fps=30,
video_bitrate="1500k",
audio_device=None,
)
c = CameraController(**cfg)
# Optional: if you do not want to rely on the automatic start in __init__, call it explicitly:
# c = CameraController(host_id=cfg["host_id"])
# c.start(cfg)
run_seconds = 30  # test run duration (seconds)
t0 = time.time()
try:
while True:
st = c.get_status()
print(json.dumps(st, ensure_ascii=False, indent=2))
if time.time() - t0 >= run_seconds:
break
time.sleep(2)
except KeyboardInterrupt:
print("Interrupted, stopping...")
finally:
print("Stopping controller...")
c.stop()
print("Done.")
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,36 @@
import cv2
# Recommended: URL-encode "@" in the credentials: @ -> %40
RTSP_URL = "rtsp://admin:admin123@192.168.31.164:554/stream1"
OUTPUT_IMAGE = "rtsp_test_frame.jpg"
def main():
print(f"尝试连接 RTSP 流: {RTSP_URL}")
cap = cv2.VideoCapture(RTSP_URL)
if not cap.isOpened():
print("错误:无法打开 RTSP 流,请检查:")
print(" 1. IP/端口是否正确")
print(" 2. 账号密码(尤其是 @ 是否已转成 %40是否正确")
print(" 3. 摄像头是否允许当前主机访问(同一网段、防火墙等)")
return
print("连接成功,开始读取一帧...")
ret, frame = cap.read()
if not ret or frame is None:
print("错误:已连接但未能读取到帧数据(可能是码流未开启或网络抖动)")
cap.release()
return
# Save the current frame
success = cv2.imwrite(OUTPUT_IMAGE, frame)
cap.release()
if success:
print(f"成功截取一帧并保存为: {OUTPUT_IMAGE}")
else:
print("错误:写入图片失败,请检查磁盘权限/路径")
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,21 @@
# run_camera_push.py
import time
from cameraDriver import CameraController  # adjust to your actual module name
if __name__ == "__main__":
controller = CameraController(
host_id="demo-host",
signal_backend_url="wss://sciol.ac.cn/api/realtime/signal/host",
rtmp_url="rtmp://srs.sciol.ac.cn:4499/live/camera-01",
webrtc_api="https://srs.sciol.ac.cn/rtc/v1/play/",
webrtc_stream_url="webrtc://srs.sciol.ac.cn:4500/live/camera-01",
camera_rtsp_url="rtsp://admin:admin123@192.168.31.164:554/stream1",
)
try:
while True:
status = controller.get_status()
print(status)
time.sleep(5)
except KeyboardInterrupt:
controller.stop()

View File

@@ -0,0 +1,78 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Test PTZ through CameraController:
move the camera down, up, left, then right a few times in sequence.
"""
import time
import sys
# Adjust the import path to your project layout:
# here CameraController is assumed to live in cameraDriver.py
from cameraDriver import CameraController
def main():
# === Fill in the IP, port, username and password for your setup ===
ptz_host = "192.168.31.164"
ptz_port = 2020  # note: keep this consistent with the standalone PTZController test
ptz_user = "admin"
ptz_password = "admin123"
# 1. Create the CameraController instance
cam = CameraController(
# add the other camera parameters according to your class's __init__
ptz_host=ptz_host,
ptz_port=ptz_port,
ptz_user=ptz_user,
ptz_password=ptz_password,
)
# 2. Start / initialize (if your CameraController exposes something like start(config))
# A minimal config here; the PTZ fields are what matter
config = {
"ptz_host": ptz_host,
"ptz_port": ptz_port,
"ptz_user": ptz_user,
"ptz_password": ptz_password,
}
try:
cam.start(config)
except Exception as e:
print(f"[TEST] CameraController start() 失败: {e}", file=sys.stderr)
return
# Optionally check whether the internal _ptz was initialized successfully (if your CameraController wraps it)
if getattr(cam, "_ptz", None) is None:
print("[TEST] CameraController 内部 PTZ 未初始化成功,请检查 ptz_host/port/user/password 配置。", file=sys.stderr)
return
# 3. Call the CameraController PTZ methods in turn
# This assumes CameraController exposes these public methods:
# ptz_move_down / ptz_move_up / ptz_move_left / ptz_move_right
# If yours are named differently, just rename the calls below.
print("向下移动(通过 CameraController...")
cam.ptz_move_down(speed=0.5, duration=1.0)
time.sleep(1)
print("向上移动(通过 CameraController...")
cam.ptz_move_up(speed=0.5, duration=1.0)
time.sleep(1)
print("向左移动(通过 CameraController...")
cam.ptz_move_left(speed=0.5, duration=1.0)
time.sleep(1)
print("向右移动(通过 CameraController...")
cam.ptz_move_right(speed=0.5, duration=1.0)
time.sleep(1)
print("测试结束。")
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,50 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Test the PTZController class in cameraDriver.py by moving the camera through a few motions in sequence.
"""
import time
from cameraDriver import PTZController
def main():
# Fill in the IP, port, username and password for your setup
host = "192.168.31.164"
port = 80
user = "admin"
password = "admin123"
ptz = PTZController(host=host, port=port, user=user, password=password)
# 1. Connect to the camera
if not ptz.connect():
print("连接 PTZ 失败,检查 IP/用户名/密码/端口。")
return
# 2. Test a few motions in turn
# sleep between motions so they are easier to observe
print("向下移动...")
ptz.move_down(speed=0.5, duration=1.0)
time.sleep(1)
print("向上移动...")
ptz.move_up(speed=0.5, duration=1.0)
time.sleep(1)
print("向左移动...")
ptz.move_left(speed=0.5, duration=1.0)
time.sleep(1)
print("向右移动...")
ptz.move_right(speed=0.5, duration=1.0)
time.sleep(1)
print("测试结束。")
if __name__ == "__main__":
main()

View File

@@ -147,6 +147,9 @@ class LiquidHandlerMiddleware(LiquidHandler):
offsets: Optional[List[Coordinate]] = None,
**backend_kwargs,
):
# If use_channels is None, default to all channels
if use_channels is None:
use_channels = list(range(self.channel_num))
if not offsets or (isinstance(offsets, list) and len(offsets) != len(use_channels)):
offsets = [Coordinate.zero()] * len(use_channels)
if self._simulator:
@@ -759,7 +762,7 @@ class LiquidHandlerAbstract(LiquidHandlerMiddleware):
blow_out_air_volume=current_dis_blow_out_air_volume,
spread=spread,
)
if delays is not None:
if delays is not None and len(delays) > 1:
await self.custom_delay(seconds=delays[1])
await self.touch_tip(current_targets)
await self.discard_tips()
@@ -833,8 +836,10 @@ class LiquidHandlerAbstract(LiquidHandlerMiddleware):
spread=spread,
)
if delays is not None:
if delays is not None and len(delays) > 1:
await self.custom_delay(seconds=delays[1])
# Only call mix when mix_time is valid
if mix_time is not None and mix_time > 0:
await self.mix(
targets=[targets[_]],
mix_time=mix_time,
@@ -843,7 +848,7 @@ class LiquidHandlerAbstract(LiquidHandlerMiddleware):
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
mix_rate=mix_rate if mix_rate else None,
)
if delays is not None:
if delays is not None and len(delays) > 1:
await self.custom_delay(seconds=delays[1])
await self.touch_tip(targets[_])
await self.discard_tips()
@@ -893,9 +898,11 @@ class LiquidHandlerAbstract(LiquidHandlerMiddleware):
blow_out_air_volume=current_dis_blow_out_air_volume,
spread=spread,
)
if delays is not None:
if delays is not None and len(delays) > 1:
await self.custom_delay(seconds=delays[1])
# Only call mix when mix_time is valid
if mix_time is not None and mix_time > 0:
await self.mix(
targets=current_targets,
mix_time=mix_time,
@@ -904,7 +911,7 @@ class LiquidHandlerAbstract(LiquidHandlerMiddleware):
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
mix_rate=mix_rate if mix_rate else None,
)
if delays is not None:
if delays is not None and len(delays) > 1:
await self.custom_delay(seconds=delays[1])
await self.touch_tip(current_targets)
await self.discard_tips()
@@ -942,29 +949,126 @@ class LiquidHandlerAbstract(LiquidHandlerMiddleware):
delays: Optional[List[int]] = None,
none_keys: List[str] = [],
):
"""Transfer liquid from each *source* well/plate to the corresponding *target*.
"""Transfer liquid with automatic mode detection.
Supports three transfer modes:
1. One-to-many (1 source -> N targets): Distribute from one source to multiple targets
2. One-to-one (N sources -> N targets): Standard transfer, each source to corresponding target
3. Many-to-one (N sources -> 1 target): Combine multiple sources into one target
Parameters
----------
asp_vols, dis_vols
Single volume (µL) or list matching the number of transfers.
Single volume (µL) or list. Automatically expanded based on transfer mode.
sources, targets
Same-length sequences of containers (wells or plates). In 96-well mode
each must contain exactly one plate.
Containers (wells or plates). Length determines transfer mode:
- len(sources) == 1, len(targets) > 1: One-to-many mode
- len(sources) == len(targets): One-to-one mode
- len(sources) > 1, len(targets) == 1: Many-to-one mode
tip_racks
One or more TipRacks providing fresh tips.
is_96_well
Set *True* to use the 96-channel head.
"""
# Make sure use_channels has a default value
if use_channels is None:
use_channels = [0] if self.channel_num >= 1 else list(range(self.channel_num))
if is_96_well:
pass # This mode is not verified.
else:
# Convert the volume parameters to lists
if isinstance(asp_vols, (int, float)):
asp_vols = [float(asp_vols)]
else:
asp_vols = [float(v) for v in asp_vols]
if isinstance(dis_vols, (int, float)):
dis_vols = [float(dis_vols)]
else:
dis_vols = [float(v) for v in dis_vols]
# Normalize the mix count to a scalar so comparing an array/list with an int does not raise
if mix_times is not None and not isinstance(mix_times, (int, float)):
try:
mix_times = mix_times[0] if len(mix_times) > 0 else None
except Exception:
try:
mix_times = next(iter(mix_times))
except Exception:
pass
if mix_times is not None:
mix_times = int(mix_times)
# Detect the transfer mode
num_sources = len(sources)
num_targets = len(targets)
if num_sources == 1 and num_targets > 1:
# Mode 1: one-to-many (1 source -> N targets)
await self._transfer_one_to_many(
sources[0], targets, tip_racks, use_channels,
asp_vols, dis_vols, asp_flow_rates, dis_flow_rates,
offsets, touch_tip, liquid_height, blow_out_air_volume,
spread, mix_stage, mix_times, mix_vol, mix_rate,
mix_liquid_height, delays
)
elif num_sources > 1 and num_targets == 1:
# Mode 2: many-to-one (N sources -> 1 target)
await self._transfer_many_to_one(
sources, targets[0], tip_racks, use_channels,
asp_vols, dis_vols, asp_flow_rates, dis_flow_rates,
offsets, touch_tip, liquid_height, blow_out_air_volume,
spread, mix_stage, mix_times, mix_vol, mix_rate,
mix_liquid_height, delays
)
elif num_sources == num_targets:
# Mode 3: one-to-one (N sources -> N targets) - original logic
await self._transfer_one_to_one(
sources, targets, tip_racks, use_channels,
asp_vols, dis_vols, asp_flow_rates, dis_flow_rates,
offsets, touch_tip, liquid_height, blow_out_air_volume,
spread, mix_stage, mix_times, mix_vol, mix_rate,
mix_liquid_height, delays
)
else:
raise ValueError(
f"Unsupported transfer mode: {num_sources} sources -> {num_targets} targets. "
"Supported modes: 1->N, N->1, or N->N."
)
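# Illustrative call shapes for the three modes (the public method name sits above this
# hunk and is not shown, so `transfer(...)` here is only a placeholder):
#   1 -> N: transfer(asp_vols=300, dis_vols=[100, 100, 100],
#                    sources=[stock], targets=[w1, w2, w3], ...)
#   N -> N: transfer(asp_vols=[50, 50], dis_vols=[50, 50],
#                    sources=[s1, s2], targets=[t1, t2], ...)
#   N -> 1: transfer(asp_vols=[40, 60], dis_vols=[100],
#                    sources=[acid, base], targets=[mix_well], ...)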
async def _transfer_one_to_one(
self,
sources: Sequence[Container],
targets: Sequence[Container],
tip_racks: Sequence[TipRack],
use_channels: List[int],
asp_vols: List[float],
dis_vols: List[float],
asp_flow_rates: Optional[List[Optional[float]]],
dis_flow_rates: Optional[List[Optional[float]]],
offsets: Optional[List[Coordinate]],
touch_tip: bool,
liquid_height: Optional[List[Optional[float]]],
blow_out_air_volume: Optional[List[Optional[float]]],
spread: Literal["wide", "tight", "custom"],
mix_stage: Optional[Literal["none", "before", "after", "both"]],
mix_times: Optional[int],
mix_vol: Optional[int],
mix_rate: Optional[int],
mix_liquid_height: Optional[float],
delays: Optional[List[int]],
):
"""一对一传输模式N sources -> N targets"""
# Validate parameter lengths
if len(asp_vols) != len(targets):
raise ValueError(f"Length of `asp_vols` {len(asp_vols)} must match `targets` {len(targets)}.")
if len(dis_vols) != len(targets):
raise ValueError(f"Length of `dis_vols` {len(dis_vols)} must match `targets` {len(targets)}.")
if len(sources) != len(targets):
raise ValueError(f"Length of `sources` {len(sources)} must match `targets` {len(targets)}.")
# First group the tasks, then process them 1 at a time (or 8 at a time)
if len(use_channels) == 1:
for _ in range(len(targets)):
tip = []
@@ -976,10 +1080,10 @@ class LiquidHandlerAbstract(LiquidHandlerMiddleware):
resources=[sources[_]],
vols=[asp_vols[_]],
use_channels=use_channels,
flow_rates=[asp_flow_rates[0]] if asp_flow_rates else None,
offsets=[offsets[0]] if offsets else None,
liquid_height=[liquid_height[0]] if liquid_height else None,
blow_out_air_volume=[blow_out_air_volume[0]] if blow_out_air_volume else None,
flow_rates=[asp_flow_rates[_]] if asp_flow_rates and len(asp_flow_rates) > _ else None,
offsets=[offsets[_]] if offsets and len(offsets) > _ else None,
liquid_height=[liquid_height[_]] if liquid_height and len(liquid_height) > _ else None,
blow_out_air_volume=[blow_out_air_volume[_]] if blow_out_air_volume and len(blow_out_air_volume) > _ else None,
spread=spread,
)
if delays is not None:
@@ -988,14 +1092,15 @@ class LiquidHandlerAbstract(LiquidHandlerMiddleware):
resources=[targets[_]],
vols=[dis_vols[_]],
use_channels=use_channels,
flow_rates=[dis_flow_rates[1]] if dis_flow_rates else None,
offsets=[offsets[1]] if offsets else None,
blow_out_air_volume=[blow_out_air_volume[1]] if blow_out_air_volume else None,
liquid_height=[liquid_height[1]] if liquid_height else None,
flow_rates=[dis_flow_rates[_]] if dis_flow_rates and len(dis_flow_rates) > _ else None,
offsets=[offsets[_]] if offsets and len(offsets) > _ else None,
blow_out_air_volume=[blow_out_air_volume[_]] if blow_out_air_volume and len(blow_out_air_volume) > _ else None,
liquid_height=[liquid_height[_]] if liquid_height and len(liquid_height) > _ else None,
spread=spread,
)
if delays is not None:
if delays is not None and len(delays) > 1:
await self.custom_delay(seconds=delays[1])
if mix_stage in ["after", "both"] and mix_times is not None and mix_times > 0:
await self.mix(
targets=[targets[_]],
mix_time=mix_times,
@@ -1004,20 +1109,16 @@ class LiquidHandlerAbstract(LiquidHandlerMiddleware):
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
mix_rate=mix_rate if mix_rate else None,
)
if delays is not None:
if delays is not None and len(delays) > 1:
await self.custom_delay(seconds=delays[1])
await self.touch_tip(targets[_])
await self.discard_tips()
await self.discard_tips(use_channels=use_channels)
elif len(use_channels) == 8:
# For 8 channels, first check that the task set can actually be handled by the 8-channel pipetting head
if len(targets) % 8 != 0:
raise ValueError(f"Length of `targets` {len(targets)} must be a multiple of 8 for 8-channel mode.")
# Walk the task sequence 8 at a time
for i in range(0, len(targets), 8):
# Take the next 8 tasks
tip = []
for _ in range(len(use_channels)):
tip.extend(next(self.current_tip))
@@ -1026,14 +1127,14 @@ class LiquidHandlerAbstract(LiquidHandlerMiddleware):
current_reagent_sources = sources[i:i + 8]
current_asp_vols = asp_vols[i:i + 8]
current_dis_vols = dis_vols[i:i + 8]
current_asp_flow_rates = asp_flow_rates[i:i + 8]
current_asp_flow_rates = asp_flow_rates[i:i + 8] if asp_flow_rates else None
current_asp_offset = offsets[i:i + 8] if offsets else [None] * 8
current_dis_offset = offsets[-i*8-8:len(offsets)-i*8] if offsets else [None] * 8
current_dis_offset = offsets[i:i + 8] if offsets else [None] * 8
current_asp_liquid_height = liquid_height[i:i + 8] if liquid_height else [None] * 8
current_dis_liquid_height = liquid_height[-i*8-8:len(liquid_height)-i*8] if liquid_height else [None] * 8
current_dis_liquid_height = liquid_height[i:i + 8] if liquid_height else [None] * 8
current_asp_blow_out_air_volume = blow_out_air_volume[i:i + 8] if blow_out_air_volume else [None] * 8
current_dis_blow_out_air_volume = blow_out_air_volume[-i*8-8:len(blow_out_air_volume)-i*8] if blow_out_air_volume else [None] * 8
current_dis_flow_rates = dis_flow_rates[i:i + 8] if dis_flow_rates else [None] * 8
current_dis_blow_out_air_volume = blow_out_air_volume[i:i + 8] if blow_out_air_volume else [None] * 8
current_dis_flow_rates = dis_flow_rates[i:i + 8] if dis_flow_rates else None
await self.aspirate(
resources=current_reagent_sources,
@@ -1058,9 +1159,10 @@ class LiquidHandlerAbstract(LiquidHandlerMiddleware):
liquid_height=current_dis_liquid_height,
spread=spread,
)
if delays is not None:
if delays is not None and len(delays) > 1:
await self.custom_delay(seconds=delays[1])
if mix_stage in ["after", "both"] and mix_times is not None and mix_times > 0:
await self.mix(
targets=current_targets,
mix_time=mix_times,
@@ -1069,11 +1171,364 @@ class LiquidHandlerAbstract(LiquidHandlerMiddleware):
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
mix_rate=mix_rate if mix_rate else None,
)
if delays is not None:
if delays is not None and len(delays) > 1:
await self.custom_delay(seconds=delays[1])
await self.touch_tip(current_targets)
await self.discard_tips([0,1,2,3,4,5,6,7])
async def _transfer_one_to_many(
self,
source: Container,
targets: Sequence[Container],
tip_racks: Sequence[TipRack],
use_channels: List[int],
asp_vols: List[float],
dis_vols: List[float],
asp_flow_rates: Optional[List[Optional[float]]],
dis_flow_rates: Optional[List[Optional[float]]],
offsets: Optional[List[Coordinate]],
touch_tip: bool,
liquid_height: Optional[List[Optional[float]]],
blow_out_air_volume: Optional[List[Optional[float]]],
spread: Literal["wide", "tight", "custom"],
mix_stage: Optional[Literal["none", "before", "after", "both"]],
mix_times: Optional[int],
mix_vol: Optional[int],
mix_rate: Optional[int],
mix_liquid_height: Optional[float],
delays: Optional[List[int]],
):
"""一对多传输模式1 source -> N targets"""
# Validate and expand the volume parameters
if len(asp_vols) == 1:
# If only one aspiration volume is given, compute the total aspiration volume (sum of all dispense volumes)
total_asp_vol = sum(dis_vols)
asp_vol = asp_vols[0] if asp_vols[0] >= total_asp_vol else total_asp_vol
else:
raise ValueError("For one-to-many mode, `asp_vols` should be a single value or list with one element.")
if len(dis_vols) != len(targets):
raise ValueError(f"Length of `dis_vols` {len(dis_vols)} must match `targets` {len(targets)}.")
if len(use_channels) == 1:
# Single-channel mode: aspirate once, dispense multiple times
tip = []
for _ in range(len(use_channels)):
tip.extend(next(self.current_tip))
await self.pick_up_tips(tip)
# Aspirate the total volume from the source container
await self.aspirate(
resources=[source],
vols=[asp_vol],
use_channels=use_channels,
flow_rates=[asp_flow_rates[0]] if asp_flow_rates and len(asp_flow_rates) > 0 else None,
offsets=[offsets[0]] if offsets and len(offsets) > 0 else None,
liquid_height=[liquid_height[0]] if liquid_height and len(liquid_height) > 0 else None,
blow_out_air_volume=[blow_out_air_volume[0]] if blow_out_air_volume and len(blow_out_air_volume) > 0 else None,
spread=spread,
)
if delays is not None:
await self.custom_delay(seconds=delays[0])
# Dispense in several steps into the different target containers
for idx, target in enumerate(targets):
await self.dispense(
resources=[target],
vols=[dis_vols[idx]],
use_channels=use_channels,
flow_rates=[dis_flow_rates[idx]] if dis_flow_rates and len(dis_flow_rates) > idx else None,
offsets=[offsets[idx]] if offsets and len(offsets) > idx else None,
blow_out_air_volume=[blow_out_air_volume[idx]] if blow_out_air_volume and len(blow_out_air_volume) > idx else None,
liquid_height=[liquid_height[idx]] if liquid_height and len(liquid_height) > idx else None,
spread=spread,
)
if delays is not None and len(delays) > 1:
await self.custom_delay(seconds=delays[1])
if mix_stage in ["after", "both"] and mix_times is not None and mix_times > 0:
await self.mix(
targets=[target],
mix_time=mix_times,
mix_vol=mix_vol,
offsets=offsets[idx:idx+1] if offsets else None,
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
mix_rate=mix_rate if mix_rate else None,
)
if touch_tip:
await self.touch_tip([target])
await self.discard_tips(use_channels=use_channels)
elif len(use_channels) == 8:
# 8-channel mode: the number of targets must be a multiple of 8
if len(targets) % 8 != 0:
raise ValueError(f"For 8-channel mode, number of targets {len(targets)} must be a multiple of 8.")
# Process 8 targets at a time
for i in range(0, len(targets), 8):
tip = []
for _ in range(len(use_channels)):
tip.extend(next(self.current_tip))
await self.pick_up_tips(tip)
current_targets = targets[i:i + 8]
current_dis_vols = dis_vols[i:i + 8]
# All 8 channels aspirate from the same source; each channel's aspiration volume equals its dispense volume
current_asp_flow_rates = asp_flow_rates[0:1] * 8 if asp_flow_rates and len(asp_flow_rates) > 0 else None
current_asp_offset = offsets[0:1] * 8 if offsets and len(offsets) > 0 else [None] * 8
current_asp_liquid_height = liquid_height[0:1] * 8 if liquid_height and len(liquid_height) > 0 else [None] * 8
current_asp_blow_out_air_volume = blow_out_air_volume[0:1] * 8 if blow_out_air_volume and len(blow_out_air_volume) > 0 else [None] * 8
# Aspirate from the source (all 8 channels from the same source, but with different per-channel volumes)
await self.aspirate(
resources=[source] * 8,  # all 8 channels aspirate from the same source
vols=current_dis_vols,  # each channel aspirates exactly its dispense volume
use_channels=use_channels,
flow_rates=current_asp_flow_rates,
offsets=current_asp_offset,
liquid_height=current_asp_liquid_height,
blow_out_air_volume=current_asp_blow_out_air_volume,
spread=spread,
)
if delays is not None:
await self.custom_delay(seconds=delays[0])
# Dispense to the 8 targets
current_dis_flow_rates = dis_flow_rates[i:i + 8] if dis_flow_rates else None
current_dis_offset = offsets[i:i + 8] if offsets else [None] * 8
current_dis_liquid_height = liquid_height[i:i + 8] if liquid_height else [None] * 8
current_dis_blow_out_air_volume = blow_out_air_volume[i:i + 8] if blow_out_air_volume else [None] * 8
await self.dispense(
resources=current_targets,
vols=current_dis_vols,
use_channels=use_channels,
flow_rates=current_dis_flow_rates,
offsets=current_dis_offset,
blow_out_air_volume=current_dis_blow_out_air_volume,
liquid_height=current_dis_liquid_height,
spread=spread,
)
if delays is not None and len(delays) > 1:
await self.custom_delay(seconds=delays[1])
if mix_stage in ["after", "both"] and mix_times is not None and mix_times > 0:
await self.mix(
targets=current_targets,
mix_time=mix_times,
mix_vol=mix_vol,
offsets=offsets if offsets else None,
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
mix_rate=mix_rate if mix_rate else None,
)
if touch_tip:
await self.touch_tip(current_targets)
await self.discard_tips([0,1,2,3,4,5,6,7])
async def _transfer_many_to_one(
self,
sources: Sequence[Container],
target: Container,
tip_racks: Sequence[TipRack],
use_channels: List[int],
asp_vols: List[float],
dis_vols: List[float],
asp_flow_rates: Optional[List[Optional[float]]],
dis_flow_rates: Optional[List[Optional[float]]],
offsets: Optional[List[Coordinate]],
touch_tip: bool,
liquid_height: Optional[List[Optional[float]]],
blow_out_air_volume: Optional[List[Optional[float]]],
spread: Literal["wide", "tight", "custom"],
mix_stage: Optional[Literal["none", "before", "after", "both"]],
mix_times: Optional[int],
mix_vol: Optional[int],
mix_rate: Optional[int],
mix_liquid_height: Optional[float],
delays: Optional[List[int]],
):
"""多对一传输模式N sources -> 1 target汇总/混合)"""
# Validate and expand the volume parameters
if len(asp_vols) != len(sources):
raise ValueError(f"Length of `asp_vols` {len(asp_vols)} must match `sources` {len(sources)}.")
# Two modes are supported:
# 1. dis_vols is a single value: all sources are pooled, using the total aspirated volume or the specified dispense volume
# 2. dis_vols has the same length as asp_vols: each source is dispensed at its own ratio (proportional mixing)
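# Hypothetical example for clarity (not from the original source): with asp_vols=[50, 30, 20],
#   mode 1: dis_vols=[100]        -> every source's aspirated volume is pooled into the single target
#   mode 2: dis_vols=[40, 25, 15] -> each source is dispensed with its own volume (proportional mixing)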
if len(dis_vols) == 1:
# Mode 1: single dispense volume
total_dis_vol = sum(asp_vols)
dis_vol = dis_vols[0] if dis_vols[0] >= total_dis_vol else total_dis_vol
use_proportional_mixing = False
elif len(dis_vols) == len(asp_vols):
# Mode 2: proportional mixing
use_proportional_mixing = True
else:
raise ValueError(
f"For many-to-one mode, `dis_vols` should be a single value or list with length {len(asp_vols)} "
f"(matching `asp_vols`). Got length {len(dis_vols)}."
)
if len(use_channels) == 1:
# Single-channel mode: aspirate several times, dispensing into the single target each time
# Pre-mix first (if requested)
if mix_stage in ["before", "both"] and mix_times is not None and mix_times > 0:
# Note: mixing the source containers before aspiration is uncommon, so it is skipped here
pass
# Aspirate from each source container and dispense into the target container
for idx, source in enumerate(sources):
tip = []
for _ in range(len(use_channels)):
tip.extend(next(self.current_tip))
await self.pick_up_tips(tip)
await self.aspirate(
resources=[source],
vols=[asp_vols[idx]],
use_channels=use_channels,
flow_rates=[asp_flow_rates[idx]] if asp_flow_rates and len(asp_flow_rates) > idx else None,
offsets=[offsets[idx]] if offsets and len(offsets) > idx else None,
liquid_height=[liquid_height[idx]] if liquid_height and len(liquid_height) > idx else None,
blow_out_air_volume=[blow_out_air_volume[idx]] if blow_out_air_volume and len(blow_out_air_volume) > idx else None,
spread=spread,
)
if delays is not None:
await self.custom_delay(seconds=delays[0])
# Dispense into the target container
if use_proportional_mixing:
# Proportional mixing: use the matching dis_vols entry
dis_vol = dis_vols[idx]
dis_flow_rate = dis_flow_rates[idx] if dis_flow_rates and len(dis_flow_rates) > idx else None
dis_offset = offsets[idx] if offsets and len(offsets) > idx else None
dis_liquid_height = liquid_height[idx] if liquid_height and len(liquid_height) > idx else None
dis_blow_out = blow_out_air_volume[idx] if blow_out_air_volume and len(blow_out_air_volume) > idx else None
else:
# Standard mode: dispense volume equals the aspirated volume
dis_vol = asp_vols[idx]
dis_flow_rate = dis_flow_rates[0] if dis_flow_rates and len(dis_flow_rates) > 0 else None
dis_offset = offsets[0] if offsets and len(offsets) > 0 else None
dis_liquid_height = liquid_height[0] if liquid_height and len(liquid_height) > 0 else None
dis_blow_out = blow_out_air_volume[0] if blow_out_air_volume and len(blow_out_air_volume) > 0 else None
await self.dispense(
resources=[target],
vols=[dis_vol],
use_channels=use_channels,
flow_rates=[dis_flow_rate] if dis_flow_rate is not None else None,
offsets=[dis_offset] if dis_offset is not None else None,
blow_out_air_volume=[dis_blow_out] if dis_blow_out is not None else None,
liquid_height=[dis_liquid_height] if dis_liquid_height is not None else None,
spread=spread,
)
if delays is not None and len(delays) > 1:
await self.custom_delay(seconds=delays[1])
await self.discard_tips(use_channels=use_channels)
# Finally, mix in the target container (if requested)
if mix_stage in ["after", "both"] and mix_times is not None and mix_times > 0:
await self.mix(
targets=[target],
mix_time=mix_times,
mix_vol=mix_vol,
offsets=offsets[0:1] if offsets else None,
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
mix_rate=mix_rate if mix_rate else None,
)
if touch_tip:
await self.touch_tip([target])
elif len(use_channels) == 8:
# 8-channel mode requires the number of sources to be a multiple of 8
if len(sources) % 8 != 0:
raise ValueError(f"For 8-channel mode, number of sources {len(sources)} must be a multiple of 8.")
# Process 8 sources per pass
for i in range(0, len(sources), 8):
tip = []
for _ in range(len(use_channels)):
tip.extend(next(self.current_tip))
await self.pick_up_tips(tip)
current_sources = sources[i:i + 8]
current_asp_vols = asp_vols[i:i + 8]
current_asp_flow_rates = asp_flow_rates[i:i + 8] if asp_flow_rates else None
current_asp_offset = offsets[i:i + 8] if offsets else [None] * 8
current_asp_liquid_height = liquid_height[i:i + 8] if liquid_height else [None] * 8
current_asp_blow_out_air_volume = blow_out_air_volume[i:i + 8] if blow_out_air_volume else [None] * 8
# Aspirate from the 8 source containers
await self.aspirate(
resources=current_sources,
vols=current_asp_vols,
use_channels=use_channels,
flow_rates=current_asp_flow_rates,
offsets=current_asp_offset,
blow_out_air_volume=current_asp_blow_out_air_volume,
liquid_height=current_asp_liquid_height,
spread=spread,
)
if delays is not None:
await self.custom_delay(seconds=delays[0])
# Dispense into the target container (every channel dispenses to the same target)
if use_proportional_mixing:
# Proportional mixing: use the matching dis_vols entries
current_dis_vols = dis_vols[i:i + 8]
current_dis_flow_rates = dis_flow_rates[i:i + 8] if dis_flow_rates else None
current_dis_offset = offsets[i:i + 8] if offsets else [None] * 8
current_dis_liquid_height = liquid_height[i:i + 8] if liquid_height else [None] * 8
current_dis_blow_out_air_volume = blow_out_air_volume[i:i + 8] if blow_out_air_volume else [None] * 8
else:
# Standard mode: each channel dispenses the volume it aspirated
current_dis_vols = current_asp_vols
current_dis_flow_rates = dis_flow_rates[0:1] * 8 if dis_flow_rates else None
current_dis_offset = offsets[0:1] * 8 if offsets else [None] * 8
current_dis_liquid_height = liquid_height[0:1] * 8 if liquid_height else [None] * 8
current_dis_blow_out_air_volume = blow_out_air_volume[0:1] * 8 if blow_out_air_volume else [None] * 8
await self.dispense(
resources=[target] * 8,  # all 8 channels dispense into the same target
vols=current_dis_vols,
use_channels=use_channels,
flow_rates=current_dis_flow_rates,
offsets=current_dis_offset,
blow_out_air_volume=current_dis_blow_out_air_volume,
liquid_height=current_dis_liquid_height,
spread=spread,
)
if delays is not None and len(delays) > 1:
await self.custom_delay(seconds=delays[1])
await self.discard_tips([0,1,2,3,4,5,6,7])
# Finally, mix in the target container (if requested)
if mix_stage in ["after", "both"] and mix_times is not None and mix_times > 0:
await self.mix(
targets=[target],
mix_time=mix_times,
mix_vol=mix_vol,
offsets=offsets[0:1] if offsets else None,
height_to_bottom=mix_liquid_height if mix_liquid_height else None,
mix_rate=mix_rate if mix_rate else None,
)
if touch_tip:
await self.touch_tip([target])
# except Exception as e:
# traceback.print_exc()
# raise RuntimeError(f"Liquid addition failed: {e}") from e

View File

@@ -5,6 +5,7 @@ import json
import os
import socket
import time
import uuid
from typing import Any, List, Dict, Optional, Tuple, TypedDict, Union, Sequence, Iterator, Literal
from pylabrobot.liquid_handling import (
@@ -856,7 +857,30 @@ class PRCXI9300Api:
def _raw_request(self, payload: str) -> str:
if self.debug:
return " "
# In debug/simulation mode, return parseable mock JSON directly so that the later json.loads does not fail
try:
req = json.loads(payload)
method = req.get("MethodName")
except Exception:
method = None
data: Any = True
if method in {"AddSolution"}:
data = str(uuid.uuid4())
elif method in {"AddWorkTabletMatrix", "AddWorkTabletMatrix2"}:
data = {"Success": True, "Message": "debug mock"}
elif method in {"GetErrorCode"}:
data = ""
elif method in {"RemoveErrorCodet", "Reset", "Start", "LoadSolution", "Pause", "Resume", "Stop"}:
data = True
elif method in {"GetStepStateList", "GetStepStatus", "GetStepState"}:
data = []
elif method in {"GetLocation"}:
data = {"X": 0, "Y": 0, "Z": 0}
elif method in {"GetResetStatus"}:
data = False
return json.dumps({"Success": True, "Msg": "debug mock", "Data": data})
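# Illustrative sketch (assumed payload shape, not part of the original diff): in debug mode a request such as
# '{"MethodName": "AddSolution", ...}' yields '{"Success": true, "Msg": "debug mock", "Data": "<random uuid>"}',
# so the downstream json.loads and Data handling keep working without a real instrument connection.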
with contextlib.closing(socket.socket()) as sock:
sock.settimeout(self.timeout)
sock.connect((self.host, self.port))

View File

@@ -7,7 +7,7 @@ class VirtualMultiwayValve:
"""
Virtual 9-way valve - position 0 connects to the transfer pump, positions 1-8 connect to other devices 🔄
"""
def __init__(self, port: str = "VIRTUAL", positions: int = 8):
def __init__(self, port: str = "VIRTUAL", positions: int = 8, **kwargs):
self.port = port
self.max_positions = positions  # positions 1-8
self.total_positions = positions + 1  # positions 0-8, 9 slots in total

View File

@@ -147,7 +147,7 @@ class WorkstationBase(ABC):
def __init__(
self,
deck: Deck,
deck: Optional[Deck],
*args,
**kwargs,  # kwargs is required
):
@@ -349,5 +349,5 @@ class WorkstationBase(ABC):
class ProtocolNode(WorkstationBase):
def __init__(self, deck: Optional[PLRResource], *args, **kwargs):
def __init__(self, protocol_type: List[str], deck: Optional[PLRResource], *args, **kwargs):
super().__init__(deck, *args, **kwargs)

View File

@@ -174,35 +174,6 @@ bioyond_dispensing_station:
title: query_resource_by_name参数
type: object
type: UniLabJsonCommand
auto-transfer_materials_to_reaction_station:
feedback: {}
goal: {}
goal_default:
target_device_id: null
transfer_groups: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
target_device_id:
type: string
transfer_groups:
type: array
required:
- target_device_id
- transfer_groups
type: object
result: {}
required:
- goal
title: transfer_materials_to_reaction_station参数
type: object
type: UniLabJsonCommand
auto-workflow_sample_locations:
feedback: {}
goal: {}

View File

@@ -0,0 +1,105 @@
cameracontroller_device:
category:
- cameraSII
class:
action_value_mappings:
auto-start:
feedback: {}
goal: {}
goal_default:
config: null
handles: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
config:
type: string
required: []
type: object
result: {}
required:
- goal
title: start参数
type: object
type: UniLabJsonCommand
auto-stop:
feedback: {}
goal: {}
goal_default: {}
handles: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties: {}
required: []
type: object
result: {}
required:
- goal
title: stop参数
type: object
type: UniLabJsonCommand
module: unilabos.devices.cameraSII.cameraUSB:CameraController
status_types:
status: dict
type: python
config_info: []
description: Uni-Lab-OS camera driver (Linux USB camera version, no PTZ)
handles: []
icon: ''
init_param_schema:
config:
properties:
audio_bitrate:
default: 64k
type: string
audio_device:
type: string
fps:
default: 30
type: integer
height:
default: 720
type: integer
host_id:
default: demo-host
type: string
rtmp_url:
default: rtmp://srs.sciol.ac.cn:4499/live/camera-01
type: string
signal_backend_url:
default: wss://sciol.ac.cn/api/realtime/signal/host
type: string
video_bitrate:
default: 1500k
type: string
video_device:
default: /dev/video0
type: string
webrtc_api:
default: https://srs.sciol.ac.cn/rtc/v1/play/
type: string
webrtc_stream_url:
default: webrtc://srs.sciol.ac.cn:4500/live/camera-01
type: string
width:
default: 1280
type: integer
required: []
type: object
data:
properties:
status:
type: object
required:
- status
type: object
registry_type: device
version: 1.0.0

View File

@@ -9333,7 +9333,34 @@ liquid_handler.prcxi:
touch_tip: false
use_channels:
- 0
handles: {}
handles:
input:
- data_key: liquid
data_source: handle
data_type: resource
handler_key: sources
label: sources
- data_key: liquid
data_source: executor
data_type: resource
handler_key: targets
label: targets
- data_key: liquid
data_source: executor
data_type: resource
handler_key: tip_rack
label: tip_rack
output:
- data_key: liquid
data_source: handle
data_type: resource
handler_key: sources_out
label: sources
- data_key: liquid
data_source: executor
data_type: resource
handler_key: targets_out
label: targets
placeholder_keys:
sources: unilabos_resources
targets: unilabos_resources

View File

@@ -6036,7 +6036,12 @@ workstation:
properties:
deck:
type: string
protocol_type:
items:
type: string
type: array
required:
- protocol_type
- deck
type: object
data:

View File

@@ -222,7 +222,7 @@ class Registry:
abs_path = Path(path).absolute()
resource_path = abs_path / "resources"
files = list(resource_path.glob("*/*.yaml"))
logger.debug(f"[UniLab Registry] resources: {resource_path.exists()}, total: {len(files)}")
logger.trace(f"[UniLab Registry] load resources? {resource_path.exists()}, total: {len(files)}")
current_resource_number = len(self.resource_type_registry) + 1
for i, file in enumerate(files):
with open(file, encoding="utf-8", mode="r") as f:

View File

@@ -2,7 +2,7 @@ container:
category:
- container
class:
module: unilabos.resources.container:RegularContainer
module: unilabos.resources.container:get_regular_container
type: pylabrobot
description: regular organic container
handles:

View File

@@ -22,6 +22,13 @@ class RegularContainer(Container):
def load_state(self, state: Dict[str, Any]):
self.state = state
def get_regular_container(name="container"):
r = RegularContainer(name=name)
r.category = "container"
return r
#
# class RegularContainer(object):
# # 第一个参数必须是id传入

View File

@@ -42,13 +42,16 @@ def canonicalize_nodes_data(
Returns:
ResourceTreeSet: the canonicalized resource tree set
"""
print_status(f"{len(nodes)} Resources loaded:", "info")
print_status(f"{len(nodes)} Resources loaded", "info")
# Step 1: basic preprocessing; handle the graphml label field
for node in nodes:
outer_host_node_id = None
for idx, node in enumerate(nodes):
if node.get("label") is not None:
node_id = node.pop("label")
node["id"] = node["name"] = node_id
if node["id"] == "host_node":
outer_host_node_id = idx
if not isinstance(node.get("config"), dict):
node["config"] = {}
if not node.get("type"):
@@ -58,25 +61,26 @@ def canonicalize_nodes_data(
node["name"] = node.get("id")
print_status(f"Warning: Node {node.get('id', 'unknown')} missing 'name', defaulting to {node['name']}", "warning")
if not isinstance(node.get("position"), dict):
node["position"] = {"position": {}}
node["pose"] = {"position": {}}
x = node.pop("x", None)
if x is not None:
node["position"]["position"]["x"] = x
node["pose"]["position"]["x"] = x
y = node.pop("y", None)
if y is not None:
node["position"]["position"]["y"] = y
node["pose"]["position"]["y"] = y
z = node.pop("z", None)
if z is not None:
node["position"]["position"]["z"] = z
node["pose"]["position"]["z"] = z
if "sample_id" in node:
sample_id = node.pop("sample_id")
if sample_id:
logger.error(f"{node}的sample_id参数已弃用sample_id: {sample_id}")
for k in list(node.keys()):
if k not in ["id", "uuid", "name", "description", "schema", "model", "icon", "parent_uuid", "parent", "type", "class", "position", "config", "data", "children"]:
if k not in ["id", "uuid", "name", "description", "schema", "model", "icon", "parent_uuid", "parent", "type", "class", "position", "config", "data", "children", "pose"]:
v = node.pop(k)
node["config"][k] = v
if outer_host_node_id is not None:
nodes.pop(outer_host_node_id)
# Step 2: process parent_relation
id2idx = {node["id"]: idx for idx, node in enumerate(nodes)}
for parent, children in parent_relation.items():
@@ -93,7 +97,7 @@ def canonicalize_nodes_data(
for node in nodes:
try:
print_status(f"DeviceId: {node['id']}, Class: {node['class']}", "info")
# print_status(f"DeviceId: {node['id']}, Class: {node['class']}", "info")
# 使用标准化方法
resource_instance = ResourceDictInstance.get_resource_instance_from_dict(node)
known_nodes[node["id"]] = resource_instance
@@ -582,10 +586,14 @@ def resource_plr_to_ulab(resource_plr: "ResourcePLR", parent_name: str = None, w
"tip_rack": "tip_rack",
"warehouse": "warehouse",
"container": "container",
"tube": "tube",
"bottle_carrier": "bottle_carrier",
"plate_adapter": "plate_adapter",
}
if source in replace_info:
return replace_info[source]
else:
if source is not None:
logger.warning(f"转换pylabrobot的时候出现未知类型: {source}")
return source

View File

@@ -5,6 +5,7 @@ from unilabos.ros.msgs.message_converter import (
get_action_type,
)
from unilabos.ros.nodes.base_device_node import init_wrapper, ROS2DeviceNode
from unilabos.ros.nodes.resource_tracker import ResourceDictInstance
# Define the generic type variable
T = TypeVar("T")
@@ -18,12 +19,11 @@ class ROS2DeviceNodeWrapper(ROS2DeviceNode):
def ros2_device_node(
cls: Type[T],
device_config: Optional[Dict[str, Any]] = None,
device_config: Optional[ResourceDictInstance] = None,
status_types: Optional[Dict[str, Any]] = None,
action_value_mappings: Optional[Dict[str, Any]] = None,
hardware_interface: Optional[Dict[str, Any]] = None,
print_publish: bool = False,
children: Optional[Dict[str, Any]] = None,
) -> Type[ROS2DeviceNodeWrapper]:
"""Create a ROS2 Node class for a device class with properties and actions.
@@ -45,7 +45,7 @@ def ros2_device_node(
if status_types is None:
status_types = {}
if device_config is None:
device_config = {}
raise ValueError("device_config cannot be None")
if action_value_mappings is None:
action_value_mappings = {}
if hardware_interface is None:
@@ -82,7 +82,6 @@ def ros2_device_node(
action_value_mappings=action_value_mappings,
hardware_interface=hardware_interface,
print_publish=print_publish,
children=children,
*args,
**kwargs,
),

View File

@@ -4,13 +4,14 @@ from typing import Optional
from unilabos.registry.registry import lab_registry
from unilabos.ros.device_node_wrapper import ros2_device_node
from unilabos.ros.nodes.base_device_node import ROS2DeviceNode, DeviceInitError
from unilabos.ros.nodes.resource_tracker import ResourceDictInstance
from unilabos.utils import logger
from unilabos.utils.exception import DeviceClassInvalid
from unilabos.utils.import_manager import default_manager
def initialize_device_from_dict(device_id, device_config) -> Optional[ROS2DeviceNode]:
def initialize_device_from_dict(device_id, device_config: ResourceDictInstance) -> Optional[ROS2DeviceNode]:
"""Initializes a device based on its configuration.
This function dynamically imports the appropriate device class and creates an instance of it using the provided device configuration.
@@ -24,15 +25,14 @@ def initialize_device_from_dict(device_id, device_config) -> Optional[ROS2Device
None
"""
d = None
original_device_config = copy.deepcopy(device_config)
device_class_config = device_config["class"]
uid = device_config["uuid"]
device_class_config = device_config.res_content.klass
uid = device_config.res_content.uuid
if isinstance(device_class_config, str):  # if it is a string, look the class up directly in lab_registry
if len(device_class_config) == 0:
raise DeviceClassInvalid(f"Device [{device_id}] class cannot be an empty string. {device_config}")
if device_class_config not in lab_registry.device_type_registry:
raise DeviceClassInvalid(f"Device [{device_id}] class {device_class_config} not found. {device_config}")
device_class_config = device_config["class"] = lab_registry.device_type_registry[device_class_config]["class"]
device_class_config = lab_registry.device_type_registry[device_class_config]["class"]
elif isinstance(device_class_config, dict):
raise DeviceClassInvalid(f"Device [{device_id}] class config should be type 'str' but 'dict' got. {device_config}")
if isinstance(device_class_config, dict):
@@ -41,17 +41,16 @@ def initialize_device_from_dict(device_id, device_config) -> Optional[ROS2Device
DEVICE = ros2_device_node(
DEVICE,
status_types=device_class_config.get("status_types", {}),
device_config=original_device_config,
device_config=device_config,
action_value_mappings=device_class_config.get("action_value_mappings", {}),
hardware_interface=device_class_config.get(
"hardware_interface",
{"name": "hardware_interface", "write": "send_command", "read": "read_data", "extra_info": []},
),
children=device_config.get("children", {})
)
)
try:
d = DEVICE(
device_id=device_id, device_uuid=uid, driver_is_ros=device_class_config["type"] == "ros2", driver_params=device_config.get("config", {})
device_id=device_id, device_uuid=uid, driver_is_ros=device_class_config["type"] == "ros2", driver_params=device_config.res_content.config
)
except DeviceInitError as ex:
return d

View File

@@ -192,7 +192,7 @@ def slave(
for device_config in devices_config.root_nodes:
device_id = device_config.res_content.id
if device_config.res_content.type == "device":
d = initialize_device_from_dict(device_id, device_config.get_nested_dict())
d = initialize_device_from_dict(device_id, device_config)
if d is not None:
devices_instances[device_id] = d
logger.info(f"Device {device_id} initialized.")

View File

@@ -48,7 +48,7 @@ from unilabos_msgs.msg import Resource # type: ignore
from unilabos.ros.nodes.resource_tracker import (
DeviceNodeResourceTracker,
ResourceTreeSet,
ResourceTreeInstance,
ResourceTreeInstance, ResourceDictInstance,
)
from unilabos.ros.x.rclpyx import get_event_loop
from unilabos.ros.utils.driver_creator import WorkstationNodeCreator, PyLabRobotCreator, DeviceClassCreator
@@ -133,12 +133,11 @@ def init_wrapper(
device_id: str,
device_uuid: str,
driver_class: type[T],
device_config: Dict[str, Any],
device_config: ResourceTreeInstance,
status_types: Dict[str, Any],
action_value_mappings: Dict[str, Any],
hardware_interface: Dict[str, Any],
print_publish: bool,
children: Optional[list] = None,
driver_params: Optional[Dict[str, Any]] = None,
driver_is_ros: bool = False,
*args,
@@ -147,8 +146,6 @@ def init_wrapper(
"""初始化设备节点的包装函数和ROS2DeviceNode初始化保持一致"""
if driver_params is None:
driver_params = kwargs.copy()
if children is None:
children = []
kwargs["device_id"] = device_id
kwargs["device_uuid"] = device_uuid
kwargs["driver_class"] = driver_class
@@ -157,7 +154,6 @@ def init_wrapper(
kwargs["status_types"] = status_types
kwargs["action_value_mappings"] = action_value_mappings
kwargs["hardware_interface"] = hardware_interface
kwargs["children"] = children
kwargs["print_publish"] = print_publish
kwargs["driver_is_ros"] = driver_is_ros
super(type(self), self).__init__(*args, **kwargs)
@@ -586,7 +582,7 @@ class BaseROS2DeviceNode(Node, Generic[T]):
except Exception as e:
self.lab_logger().error(f"更新资源uuid失败: {e}")
self.lab_logger().error(traceback.format_exc())
self.lab_logger().debug(f"资源更新结果: {response}")
self.lab_logger().trace(f"资源更新结果: {response}")
async def get_resource(self, resources_uuid: List[str], with_children: bool = True) -> ResourceTreeSet:
"""
@@ -1144,7 +1140,7 @@ class BaseROS2DeviceNode(Node, Generic[T]):
queried_resources = []
for resource_data in resource_inputs:
plr_resource = await self.get_resource_with_dir(
resource_ids=resource_data["id"], with_children=True
resource_id=resource_data["id"], with_children=True
)
queried_resources.append(plr_resource)
@@ -1168,7 +1164,6 @@ class BaseROS2DeviceNode(Node, Generic[T]):
execution_error = traceback.format_exc()
break
##### self.lab_logger().info(f"准备执行: {action_kwargs}, 函数: {ACTION.__name__}")
time_start = time.time()
time_overall = 100
future = None
@@ -1176,35 +1171,36 @@ class BaseROS2DeviceNode(Node, Generic[T]):
# Run blocking operations in the thread pool
if asyncio.iscoroutinefunction(ACTION):
try:
##### self.lab_logger().info(f"异步执行动作 {ACTION}")
future = ROS2DeviceNode.run_async_func(ACTION, trace_error=False, **action_kwargs)
def _handle_future_exception(fut):
self.lab_logger().trace(f"异步执行动作 {ACTION}")
def _handle_future_exception(fut: Future):
nonlocal execution_error, execution_success, action_return_value
try:
action_return_value = fut.result()
if isinstance(action_return_value, BaseException):
raise action_return_value
execution_success = True
except Exception as e:
except Exception as _:
execution_error = traceback.format_exc()
error(
f"异步任务 {ACTION.__name__} 报错了\n{traceback.format_exc()}\n原始输入:{action_kwargs}"
)
future = ROS2DeviceNode.run_async_func(ACTION, trace_error=False, **action_kwargs)
future.add_done_callback(_handle_future_exception)
except Exception as e:
execution_error = traceback.format_exc()
execution_success = False
self.lab_logger().error(f"创建异步任务失败: {traceback.format_exc()}")
else:
##### self.lab_logger().info(f"同步执行动作 {ACTION}")
self.lab_logger().trace(f"同步执行动作 {ACTION}")
future = self._executor.submit(ACTION, **action_kwargs)
def _handle_future_exception(fut):
def _handle_future_exception(fut: Future):
nonlocal execution_error, execution_success, action_return_value
try:
action_return_value = fut.result()
execution_success = True
except Exception as e:
except Exception as _:
execution_error = traceback.format_exc()
error(
f"同步任务 {ACTION.__name__} 报错了\n{traceback.format_exc()}\n原始输入:{action_kwargs}"
@@ -1309,7 +1305,7 @@ class BaseROS2DeviceNode(Node, Generic[T]):
get_result_info_str(execution_error, execution_success, action_return_value),
)
##### self.lab_logger().info(f"动作 {action_name} 完成并返回结果")
self.lab_logger().trace(f"动作 {action_name} 完成并返回结果")
return result_msg
return execute_callback
@@ -1544,17 +1540,29 @@ class ROS2DeviceNode:
This class wraps a device-class instance together with a ROS2 node and exposes the ROS2 interface.
It does not inherit from the device class; instead it accesses the device's attributes and methods through a proxy.
"""
@staticmethod
async def safe_task_wrapper(trace_callback, func, **kwargs):
try:
# run the wrapped coroutine exactly once, then hand the single result to the optional trace callback
result = await func(**kwargs)
if callable(trace_callback):
trace_callback(result)
return result
except Exception as e:
if callable(trace_callback):
trace_callback(e)
return e
@classmethod
def run_async_func(cls, func, trace_error=True, **kwargs) -> Task:
def _handle_future_exception(fut):
def run_async_func(cls, func, trace_error=True, inner_trace_callback=None, **kwargs) -> Task:
def _handle_future_exception(fut: Future):
try:
fut.result()
ret = fut.result()
if isinstance(ret, BaseException):
raise ret
except Exception as e:
error(f"异步任务 {func.__name__} 报错了")
error(f"异步任务 {func.__name__} 获取结果失败")
error(traceback.format_exc())
future = rclpy.get_global_executor().create_task(func(**kwargs))
future = rclpy.get_global_executor().create_task(ROS2DeviceNode.safe_task_wrapper(inner_trace_callback, func, **kwargs))
if trace_error:
future.add_done_callback(_handle_future_exception)
return future
@@ -1582,12 +1590,11 @@ class ROS2DeviceNode:
device_id: str,
device_uuid: str,
driver_class: Type[T],
device_config: Dict[str, Any],
device_config: ResourceDictInstance,
driver_params: Dict[str, Any],
status_types: Dict[str, Any],
action_value_mappings: Dict[str, Any],
hardware_interface: Dict[str, Any],
children: Dict[str, Any],
print_publish: bool = True,
driver_is_ros: bool = False,
):
@@ -1598,7 +1605,7 @@ class ROS2DeviceNode:
device_id: device identifier
device_uuid: device uuid
driver_class: device driver class
device_config: the original initialization JSON
device_config: the original ResourceDictInstance used for initialization
driver_params: parameters used to initialize the driver
status_types: status type mapping
action_value_mappings: action value mapping
@@ -1612,6 +1619,7 @@ class ROS2DeviceNode:
self._has_async_context = hasattr(driver_class, "__aenter__") and hasattr(driver_class, "__aexit__")
self._driver_class = driver_class
self.device_config = device_config
children: List[ResourceDictInstance] = device_config.children
self.driver_is_ros = driver_is_ros
self.driver_is_workstation = False
self.resource_tracker = DeviceNodeResourceTracker()

View File

@@ -289,6 +289,12 @@ class HostNode(BaseROS2DeviceNode):
self.lab_logger().info("[Host Node] Host node initialized.")
HostNode._ready_event.set()
# Send the host_node ready signal to all bridges
for bridge in self.bridges:
if hasattr(bridge, "publish_host_ready"):
bridge.publish_host_ready()
self.lab_logger().debug(f"Host ready signal sent via {bridge.__class__.__name__}")
def _send_re_register(self, sclient):
sclient.wait_for_service()
request = SerialCommand.Request()
@@ -532,7 +538,7 @@ class HostNode(BaseROS2DeviceNode):
self.lab_logger().info(f"[Host Node] Initializing device: {device_id}")
try:
d = initialize_device_from_dict(device_id, device_config.get_nested_dict())
d = initialize_device_from_dict(device_id, device_config)
except DeviceClassInvalid as e:
self.lab_logger().error(f"[Host Node] Device class invalid: {e}")
d = None
@@ -712,7 +718,7 @@ class HostNode(BaseROS2DeviceNode):
feedback_callback=lambda feedback_msg: self.feedback_callback(item, action_id, feedback_msg),
goal_uuid=goal_uuid_obj,
)
future.add_done_callback(lambda future: self.goal_response_callback(item, action_id, future))
future.add_done_callback(lambda f: self.goal_response_callback(item, action_id, f))
def goal_response_callback(self, item: "QueueItem", action_id: str, future) -> None:
"""目标响应回调"""
@@ -723,9 +729,11 @@ class HostNode(BaseROS2DeviceNode):
self.lab_logger().info(f"[Host Node] Goal {action_id} ({item.job_id}) accepted")
self._goals[item.job_id] = goal_handle
goal_handle.get_result_async().add_done_callback(
lambda future: self.get_result_callback(item, action_id, future)
goal_future = goal_handle.get_result_async()
goal_future.add_done_callback(
lambda f: self.get_result_callback(item, action_id, f)
)
goal_future.result()
def feedback_callback(self, item: "QueueItem", action_id: str, feedback_msg) -> None:
"""反馈回调"""
@@ -791,6 +799,17 @@ class HostNode(BaseROS2DeviceNode):
del self._goals[job_id]
self.lab_logger().debug(f"[Host Node] Removed goal {job_id[:8]} from _goals")
# Store the result so the HTTP API can query it
try:
from unilabos.app.web.controller import store_job_result
if goal_status == GoalStatus.STATUS_CANCELED:
store_job_result(job_id, status, return_info, {})
else:
store_job_result(job_id, status, return_info, result_data)
except ImportError:
pass  # the controller module may not be loaded
# Publish the status to the bridges
if job_id:
for bridge in self.bridges:

View File

@@ -24,7 +24,7 @@ from unilabos.ros.msgs.message_converter import (
convert_from_ros_msg_with_mapping,
)
from unilabos.ros.nodes.base_device_node import BaseROS2DeviceNode, DeviceNodeResourceTracker, ROS2DeviceNode
from unilabos.ros.nodes.resource_tracker import ResourceTreeSet
from unilabos.ros.nodes.resource_tracker import ResourceTreeSet, ResourceDictInstance
from unilabos.utils.type_check import get_result_info_str
if TYPE_CHECKING:
@@ -47,7 +47,7 @@ class ROS2WorkstationNode(BaseROS2DeviceNode):
def __init__(
self,
protocol_type: List[str],
children: Dict[str, Any],
children: List[ResourceDictInstance],
*,
driver_instance: "WorkstationBase",
device_id: str,
@@ -81,10 +81,11 @@ class ROS2WorkstationNode(BaseROS2DeviceNode):
# Initialize child devices
self.communication_node_id_to_instance = {}
for device_id, device_config in self.children.items():
if device_config.get("type", "device") != "device":
for device_config in self.children:
device_id = device_config.res_content.id
if device_config.res_content.type != "device":
self.lab_logger().debug(
f"[Protocol Node] Skipping type {device_config['type']} {device_id} already existed, skipping."
f"[Protocol Node] Skipping type {device_config.res_content.type} {device_id} already existed, skipping."
)
continue
try:
@@ -101,8 +102,9 @@ class ROS2WorkstationNode(BaseROS2DeviceNode):
self.communication_node_id_to_instance[device_id] = d
continue
for device_id, device_config in self.children.items():
if device_config.get("type", "device") != "device":
for device_config in self.children:
device_id = device_config.res_content.id
if device_config.res_content.type != "device":
continue
# Set up the hardware interface proxy
if device_id not in self.sub_devices:

View File

@@ -1,9 +1,11 @@
import inspect
import traceback
import uuid
from pydantic import BaseModel, field_serializer, field_validator
from pydantic import Field
from typing import List, Tuple, Any, Dict, Literal, Optional, cast, TYPE_CHECKING, Union
from unilabos.resources.plr_additional_res_reg import register
from unilabos.utils.log import logger
if TYPE_CHECKING:
@@ -62,11 +64,10 @@ class ResourceDict(BaseModel):
parent: Optional["ResourceDict"] = Field(description="Parent resource object", default=None, exclude=True)
type: Union[Literal["device"], str] = Field(description="Resource type")
klass: str = Field(alias="class", description="Resource class name")
position: ResourceDictPosition = Field(description="Resource position", default_factory=ResourceDictPosition)
pose: ResourceDictPosition = Field(description="Resource position", default_factory=ResourceDictPosition)
config: Dict[str, Any] = Field(description="Resource configuration")
data: Dict[str, Any] = Field(description="Resource data")
extra: Dict[str, Any] = Field(description="Extra data")
data: Dict[str, Any] = Field(description="Resource data, eg: container liquid data")
extra: Dict[str, Any] = Field(description="Extra data, eg: slot index")
@field_serializer("parent_uuid")
def _serialize_parent(self, parent_uuid: Optional["ResourceDict"]):
@@ -146,15 +147,16 @@ class ResourceDictInstance(object):
if not content.get("extra"): # MagicCode
content["extra"] = {}
if "pose" not in content:
content["pose"] = content.get("position", {})
content["pose"] = content.pop("position", {})
return ResourceDictInstance(ResourceDict.model_validate(content))
def get_nested_dict(self) -> Dict[str, Any]:
def get_plr_nested_dict(self) -> Dict[str, Any]:
"""获取资源实例的嵌套字典表示"""
res_dict = self.res_content.model_dump(by_alias=True)
res_dict["children"] = {child.res_content.id: child.get_nested_dict() for child in self.children}
res_dict["children"] = {child.res_content.id: child.get_plr_nested_dict() for child in self.children}
res_dict["parent"] = self.res_content.parent_instance_name
res_dict["position"] = self.res_content.position.position.model_dump()
res_dict["position"] = self.res_content.pose.position.model_dump()
del res_dict["pose"]
return res_dict
@@ -429,9 +431,9 @@ class ResourceTreeSet(object):
Returns:
List[PLRResource]: PLR 资源实例列表
"""
register()
from pylabrobot.resources import Resource as PLRResource
from pylabrobot.utils.object_parsing import find_subclass
import inspect
# Type mapping
TYPE_MAP = {"plate": "Plate", "well": "Well", "deck": "Deck", "container": "RegularContainer"}
@@ -459,9 +461,9 @@ class ResourceTreeSet(object):
"size_y": res.config.get("size_y", 0),
"size_z": res.config.get("size_z", 0),
"location": {
"x": res.position.position.x,
"y": res.position.position.y,
"z": res.position.position.z,
"x": res.pose.position.x,
"y": res.pose.position.y,
"z": res.pose.position.z,
"type": "Coordinate",
},
"rotation": {"x": 0, "y": 0, "z": 0, "type": "Rotation"},

View File

@@ -9,10 +9,11 @@ import asyncio
import inspect
import traceback
from abc import abstractmethod
from typing import Type, Any, Dict, Optional, TypeVar, Generic
from typing import Type, Any, Dict, Optional, TypeVar, Generic, List
from unilabos.resources.graphio import nested_dict_to_list, resource_ulab_to_plr
from unilabos.ros.nodes.resource_tracker import DeviceNodeResourceTracker
from unilabos.ros.nodes.resource_tracker import DeviceNodeResourceTracker, ResourceTreeSet, ResourceDictInstance, \
ResourceTreeInstance
from unilabos.utils import logger, import_manager
from unilabos.utils.cls_creator import create_instance_from_config
@@ -33,7 +34,7 @@ class DeviceClassCreator(Generic[T]):
This class provides a generic way to create an instance from an arbitrary class.
"""
def __init__(self, cls: Type[T], children: Dict[str, Any], resource_tracker: DeviceNodeResourceTracker):
def __init__(self, cls: Type[T], children: List[ResourceDictInstance], resource_tracker: DeviceNodeResourceTracker):
"""
Initialize the device class creator
@@ -50,9 +51,9 @@ class DeviceClassCreator(Generic[T]):
Attach resources to the device class instance
"""
if self.device_instance is not None:
for c in self.children.values():
if c["type"] != "device":
self.resource_tracker.add_resource(c)
for c in self.children:
if c.res_content.type != "device":
self.resource_tracker.add_resource(c.get_plr_nested_dict())
def create_instance(self, data: Dict[str, Any]) -> T:
"""
@@ -94,7 +95,7 @@ class PyLabRobotCreator(DeviceClassCreator[T]):
This class provides instance creation for PyLabRobot device classes, with special handling of the deserialize method.
"""
def __init__(self, cls: Type[T], children: Dict[str, Any], resource_tracker: DeviceNodeResourceTracker):
def __init__(self, cls: Type[T], children: List[ResourceDictInstance], resource_tracker: DeviceNodeResourceTracker):
"""
Initialize the PyLabRobot device class creator
@@ -111,12 +112,12 @@ class PyLabRobotCreator(DeviceClassCreator[T]):
def attach_resource(self):
pass  # only instantiated materials can be added; the original default materials were just a dict lookup
def _process_resource_mapping(self, resource, source_type):
if source_type == dict:
from pylabrobot.resources.resource import Resource
return nested_dict_to_list(resource), Resource
return resource, source_type
# def _process_resource_mapping(self, resource, source_type):
# if source_type == dict:
# from pylabrobot.resources.resource import Resource
#
# return nested_dict_to_list(resource), Resource
# return resource, source_type
def _process_resource_references(
self, data: Any, to_dict=False, states=None, prefix_path="", name_to_uuid=None
@@ -142,15 +143,21 @@ class PyLabRobotCreator(DeviceClassCreator[T]):
if isinstance(data, dict):
if "_resource_child_name" in data:
child_name = data["_resource_child_name"]
if child_name in self.children:
resource = self.children[child_name]
resource: Optional[ResourceDictInstance] = None
for child in self.children:
if child.res_content.name == child_name:
resource = child
if resource is not None:
if "_resource_type" in data:
type_path = data["_resource_type"]
try:
target_type = import_manager.get_class(type_path)
contain_model = not issubclass(target_type, Deck)
resource, target_type = self._process_resource_mapping(resource, target_type)
resource_instance: Resource = resource_ulab_to_plr(resource, contain_model) # 带state
# target_type = import_manager.get_class(type_path)
# contain_model = not issubclass(target_type, Deck)
# resource, target_type = self._process_resource_mapping(resource, target_type)
res_tree = ResourceTreeInstance(resource)
res_tree_set = ResourceTreeSet([res_tree])
resource_instance: Resource = res_tree_set.to_plr_resources()[0]
# resource_instance: Resource = resource_ulab_to_plr(resource, contain_model) # 带state
states[prefix_path] = resource_instance.serialize_all_state()
# Store the resource state keyed by prefix_path
if to_dict:
@@ -202,12 +209,12 @@ class PyLabRobotCreator(DeviceClassCreator[T]):
stack = None
# Recursively walk children to build the name_to_uuid mapping
def collect_name_to_uuid(children_dict: Dict[str, Any], result: Dict[str, str]):
def collect_name_to_uuid(children_list: List[ResourceDictInstance], result: Dict[str, str]):
"""递归遍历嵌套的 children 字典,收集 name 到 uuid 的映射"""
for child in children_dict.values():
if isinstance(child, dict):
result[child["name"]] = child["uuid"]
collect_name_to_uuid(child["children"], result)
for child in children_list:
if isinstance(child, ResourceDictInstance):
result[child.res_content.name] = child.res_content.uuid
collect_name_to_uuid(child.children, result)
name_to_uuid = {}
collect_name_to_uuid(self.children, name_to_uuid)
@@ -313,7 +320,7 @@ class WorkstationNodeCreator(DeviceClassCreator[T]):
This class provides instance creation for WorkstationNode device classes and handles the children parameter.
"""
def __init__(self, cls: Type[T], children: Dict[str, Any], resource_tracker: DeviceNodeResourceTracker):
def __init__(self, cls: Type[T], children: List[ResourceDictInstance], resource_tracker: DeviceNodeResourceTracker):
"""
Initialize the WorkstationNode device class creator
@@ -336,9 +343,9 @@ class WorkstationNodeCreator(DeviceClassCreator[T]):
try:
# Build the instance; an extra field is added for the protocol node and may be removed later
data["children"] = self.children
for material_id, child in self.children.items():
if child["type"] != "device":
self.resource_tracker.add_resource(self.children[material_id])
for child in self.children:
if child.res_content.type != "device":
self.resource_tracker.add_resource(child.get_plr_nested_dict())
deck_dict = data.get("deck")
if deck_dict:
from pylabrobot.resources import Deck, Resource

View File

@@ -1,94 +0,0 @@
import json
import sys
from datetime import datetime
from pathlib import Path
ROOT_DIR = Path(__file__).resolve().parents[2]
if str(ROOT_DIR) not in sys.path:
sys.path.insert(0, str(ROOT_DIR))
import pytest
from scripts.workflow import build_protocol_graph, draw_protocol_graph, draw_protocol_graph_with_ports
ROOT_DIR = Path(__file__).resolve().parents[2]
if str(ROOT_DIR) not in sys.path:
sys.path.insert(0, str(ROOT_DIR))
def _normalize_steps(data):
normalized = []
for step in data:
action = step.get("action") or step.get("operation")
if not action:
continue
raw_params = step.get("parameters") or step.get("action_args") or {}
params = dict(raw_params)
if "source" in raw_params and "sources" not in raw_params:
params["sources"] = raw_params["source"]
if "target" in raw_params and "targets" not in raw_params:
params["targets"] = raw_params["target"]
description = step.get("description") or step.get("purpose")
step_dict = {"action": action, "parameters": params}
if description:
step_dict["description"] = description
normalized.append(step_dict)
return normalized
def _normalize_labware(data):
labware = {}
for item in data:
reagent_name = item.get("reagent_name")
key = reagent_name or item.get("material_name") or item.get("name")
if not key:
continue
key = str(key)
idx = 1
original_key = key
while key in labware:
idx += 1
key = f"{original_key}_{idx}"
labware[key] = {
"slot": item.get("positions") or item.get("slot"),
"labware": item.get("material_name") or item.get("labware"),
"well": item.get("well", []),
"type": item.get("type", "reagent"),
"role": item.get("role", ""),
"name": key,
}
return labware
@pytest.mark.parametrize("protocol_name", [
"example_bio",
# "bioyond_materials_liquidhandling_1",
"example_prcxi",
])
def test_build_protocol_graph(protocol_name):
data_path = Path(__file__).with_name(f"{protocol_name}.json")
with data_path.open("r", encoding="utf-8") as fp:
d = json.load(fp)
if "workflow" in d and "reagent" in d:
protocol_steps = d["workflow"]
labware_info = d["reagent"]
elif "steps_info" in d and "labware_info" in d:
protocol_steps = _normalize_steps(d["steps_info"])
labware_info = _normalize_labware(d["labware_info"])
else:
raise ValueError("Unsupported protocol format")
graph = build_protocol_graph(
labware_info=labware_info,
protocol_steps=protocol_steps,
workstation_name="PRCXi",
)
timestamp = datetime.now().strftime("%Y%m%d_%H%M")
output_path = data_path.with_name(f"{protocol_name}_graph_{timestamp}.png")
draw_protocol_graph_with_ports(graph, str(output_path))
print(graph)

View File

@@ -124,11 +124,14 @@ class ColoredFormatter(logging.Formatter):
def _format_basic(self, record):
"""基本格式化,不包含颜色"""
datetime_str = datetime.fromtimestamp(record.created).strftime("%y-%m-%d [%H:%M:%S,%f")[:-3] + "]"
filename = os.path.basename(record.filename).rsplit(".", 1)[0]  # extract the file name (without path or extension)
filename = record.filename.replace(".py", "").split("\\")[-1]  # extract the file name (without path or extension)
if "/" in filename:
filename = filename.split("/")[-1]
module_path = f"{record.name}.{filename}"
func_line = f"{record.funcName}:{record.lineno}"
right_info = f" [{func_line}] [{module_path}]"
formatted_message = f"{datetime_str} [{record.levelname}] [{module_path}] [{func_line}]: {record.getMessage()}"
formatted_message = f"{datetime_str} [{record.levelname}] {record.getMessage()}{right_info}"
if record.exc_info:
exc_text = self.formatException(record.exc_info)
@@ -150,7 +153,7 @@ class ColoredFormatter(logging.Formatter):
# 配置日志处理器
def configure_logger(loglevel=None):
def configure_logger(loglevel=None, working_dir=None):
"""配置日志记录器
Args:
@@ -159,8 +162,9 @@ def configure_logger(loglevel=None):
"""
# 获取根日志记录器
root_logger = logging.getLogger()
root_logger.setLevel(TRACE_LEVEL)
# 设置日志级别
numeric_level = logging.DEBUG
if loglevel is not None:
if isinstance(loglevel, str):
# 将字符串转换为logging级别
@@ -170,12 +174,8 @@ def configure_logger(loglevel=None):
numeric_level = getattr(logging, loglevel.upper(), None)
if not isinstance(numeric_level, int):
print(f"警告: 无效的日志级别 '{loglevel}',使用默认级别 DEBUG")
numeric_level = logging.DEBUG
else:
numeric_level = loglevel
root_logger.setLevel(numeric_level)
else:
root_logger.setLevel(logging.DEBUG) # 默认级别
# 移除已存在的处理器
for handler in root_logger.handlers[:]:
@@ -183,7 +183,7 @@ def configure_logger(loglevel=None):
# 创建控制台处理器
console_handler = logging.StreamHandler()
console_handler.setLevel(root_logger.level) # 使用与根记录器相同的级别
console_handler.setLevel(numeric_level) # 使用与根记录器相同的级别
# 使用自定义的颜色格式化器
color_formatter = ColoredFormatter()
@@ -191,9 +191,30 @@ def configure_logger(loglevel=None):
# Add the handler to the root logger
root_logger.addHandler(console_handler)
# If a working directory is given, add a file handler
if working_dir is not None:
logs_dir = os.path.join(working_dir, "logs")
os.makedirs(logs_dir, exist_ok=True)
# Generate the log file name: date time.log
log_filename = datetime.now().strftime("%Y-%m-%d %H-%M-%S") + ".log"
log_filepath = os.path.join(logs_dir, log_filename)
# Create the file handler
file_handler = logging.FileHandler(log_filepath, encoding="utf-8")
file_handler.setLevel(TRACE_LEVEL)
# Use the formatter without colors
file_formatter = ColoredFormatter(use_colors=False)
file_handler.setFormatter(file_formatter)
root_logger.addHandler(file_handler)
logging.getLogger("asyncio").setLevel(logging.INFO)
logging.getLogger("urllib3").setLevel(logging.INFO)
# Configure the logging system
configure_logger()

View File

unilabos/workflow/common.py Normal file
View File

@@ -0,0 +1,547 @@
import re
import uuid
import networkx as nx
from networkx.drawing.nx_agraph import to_agraph
import matplotlib.pyplot as plt
from typing import Dict, List, Any, Tuple, Optional
Json = Dict[str, Any]
# ---------------- Graph ----------------
class WorkflowGraph:
"""简单的有向图实现:使用 params 单层参数inputs 内含连线;支持 node-link 导出"""
def __init__(self):
self.nodes: Dict[str, Dict[str, Any]] = {}
self.edges: List[Dict[str, Any]] = []
def add_node(self, node_id: str, **attrs):
self.nodes[node_id] = attrs
def add_edge(self, source: str, target: str, **attrs):
# Map source_port/target_port to the source_handle_key/target_handle_key expected by the server
source_handle_key = attrs.pop("source_port", "") or attrs.pop("source_handle_key", "")
target_handle_key = attrs.pop("target_port", "") or attrs.pop("target_handle_key", "")
edge = {
"source": source,
"target": target,
"source_node_uuid": source,
"target_node_uuid": target,
"source_handle_key": source_handle_key,
"source_handle_io": attrs.pop("source_handle_io", "source"),
"target_handle_key": target_handle_key,
"target_handle_io": attrs.pop("target_handle_io", "target"),
**attrs,
}
self.edges.append(edge)
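# Illustrative example (hypothetical node ids): add_edge("n1", "n2", source_port="targets_out", target_port="sources")
# stores {"source": "n1", "target": "n2", "source_node_uuid": "n1", "target_node_uuid": "n2",
#         "source_handle_key": "targets_out", "source_handle_io": "source",
#         "target_handle_key": "sources", "target_handle_io": "target"}.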
def _materialize_wiring_into_inputs(
self,
obj: Any,
inputs: Dict[str, Any],
variable_sources: Dict[str, Dict[str, Any]],
target_node_id: str,
base_path: List[str],
):
has_var = False
def walk(node: Any, path: List[str]):
nonlocal has_var
if isinstance(node, dict):
if "__var__" in node:
has_var = True
varname = node["__var__"]
placeholder = f"${{{varname}}}"
src = variable_sources.get(varname)
if src:
key = ".".join(path) # e.g. "params.foo.bar.0"
inputs[key] = {"node": src["node_id"], "output": src.get("output_name", "result")}
self.add_edge(
str(src["node_id"]),
target_node_id,
source_handle_io=src.get("output_name", "result"),
target_handle_io=key,
)
return placeholder
return {k: walk(v, path + [k]) for k, v in node.items()}
if isinstance(node, list):
return [walk(v, path + [str(i)]) for i, v in enumerate(node)]
return node
replaced = walk(obj, base_path[:])
return replaced, has_var
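# Illustrative example (hypothetical values): given params={"vol": {"__var__": "measured_vol"}} and
# variable_sources={"measured_vol": {"node_id": "3", "output_name": "result"}}, the walk rewrites params to
# {"vol": "${measured_vol}"}, records inputs["params.vol"] = {"node": "3", "output": "result"}, and adds an
# edge from node 3 to this node carrying that wiring.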
def add_workflow_node(
self,
node_id: int,
*,
device_key: Optional[str] = None,  # instance name, e.g. "ser"
resource_name: Optional[str] = None,  # registry key (formerly device_class)
module: Optional[str] = None,
template_name: Optional[str] = None,  # action/template name (formerly action_key)
params: Dict[str, Any],
variable_sources: Dict[str, Dict[str, Any]],
add_ready_if_no_vars: bool = True,
prev_node_id: Optional[int] = None,
**extra_attrs,
) -> None:
"""添加工作流节点params 单层;自动变量连线与 ready 串联;支持附加属性"""
node_id_str = str(node_id)
inputs: Dict[str, Any] = {}
params, has_var = self._materialize_wiring_into_inputs(
params, inputs, variable_sources, node_id_str, base_path=["params"]
)
if add_ready_if_no_vars and not has_var:
last_id = str(prev_node_id) if prev_node_id is not None else "-1"
inputs["ready"] = {"node": int(last_id), "output": "ready"}
self.add_edge(last_id, node_id_str, source_handle_io="ready", target_handle_io="ready")
node_obj = {
"device_key": device_key,
"resource_name": resource_name, # ✅ 新名字
"module": module,
"template_name": template_name, # ✅ 新名字
"params": params,
"inputs": inputs,
}
node_obj.update(extra_attrs or {})
self.add_node(node_id_str, parameters=node_obj)
# Sequential workflow export (wiring lives in inputs; no edges are returned)
def to_dict(self) -> List[Dict[str, Any]]:
result = []
for node_id, attrs in self.nodes.items():
node = {"uuid": node_id}
params = dict(attrs.get("parameters", {}) or {})
flat = {k: v for k, v in attrs.items() if k != "parameters"}
flat.update(params)
node.update(flat)
result.append(node)
return sorted(result, key=lambda n: int(n["uuid"]) if str(n["uuid"]).isdigit() else n["uuid"])
# node-link export (including edges)
def to_node_link_dict(self) -> Dict[str, Any]:
nodes_list = []
for node_id, attrs in self.nodes.items():
node_attrs = attrs.copy()
params = node_attrs.pop("parameters", {}) or {}
node_attrs.update(params)
nodes_list.append({"uuid": node_id, **node_attrs})
return {
"directed": True,
"multigraph": False,
"graph": {},
"nodes": nodes_list,
"edges": self.edges,
"links": self.edges,
}
def refactor_data(
data: List[Dict[str, Any]],
action_resource_mapping: Optional[Dict[str, str]] = None,
) -> List[Dict[str, Any]]:
"""统一的数据重构函数,根据操作类型自动选择模板
Args:
data: 原始步骤数据列表
action_resource_mapping: action 到 resource_name 的映射字典,可选
"""
refactored_data = []
# Operation mapping covering all biology and organic-chemistry operations
OPERATION_MAPPING = {
# Biology operations
"transfer_liquid": "transfer_liquid",
"transfer": "transfer",
"incubation": "incubation",
"move_labware": "move_labware",
"oscillation": "oscillation",
# Organic chemistry operations
"HeatChillToTemp": "HeatChillProtocol",
"StopHeatChill": "HeatChillStopProtocol",
"StartHeatChill": "HeatChillStartProtocol",
"HeatChill": "HeatChillProtocol",
"Dissolve": "DissolveProtocol",
"Transfer": "TransferProtocol",
"Evaporate": "EvaporateProtocol",
"Recrystallize": "RecrystallizeProtocol",
"Filter": "FilterProtocol",
"Dry": "DryProtocol",
"Add": "AddProtocol",
}
UNSUPPORTED_OPERATIONS = ["Purge", "Wait", "Stir", "ResetHandling"]
for step in data:
operation = step.get("action")
if not operation or operation in UNSUPPORTED_OPERATIONS:
continue
# Handle Repeat operations
if operation == "Repeat":
times = step.get("times", step.get("parameters", {}).get("times", 1))
sub_steps = step.get("steps", step.get("parameters", {}).get("steps", []))
for i in range(int(times)):
sub_data = refactor_data(sub_steps, action_resource_mapping)
refactored_data.extend(sub_data)
continue
# Resolve the template name
template_name = OPERATION_MAPPING.get(operation)
if not template_name:
# Infer the template type automatically
if operation.lower() in ["transfer", "incubation", "move_labware", "oscillation"]:
template_name = f"biomek-{operation}"
else:
template_name = f"{operation}Protocol"
# Resolve the resource_name
resource_name = f"device.{operation.lower()}"
if action_resource_mapping:
resource_name = action_resource_mapping.get(operation, resource_name)
# Use the step number to generate the name field
step_number = step.get("step_number")
name = f"Step {step_number}" if step_number is not None else None
# Build the step data
step_data = {
"template_name": template_name,
"resource_name": resource_name,
"description": step.get("description", step.get("purpose", f"{operation} operation")),
"lab_node_type": "Device",
"param": step.get("parameters", step.get("action_args", {})),
"footer": f"{template_name}-{resource_name}",
}
if name:
step_data["name"] = name
refactored_data.append(step_data)
return refactored_data
def build_protocol_graph(
labware_info: List[Dict[str, Any]],
protocol_steps: List[Dict[str, Any]],
workstation_name: str,
action_resource_mapping: Optional[Dict[str, str]] = None,
) -> WorkflowGraph:
"""统一的协议图构建函数,根据设备类型自动选择构建逻辑
Args:
labware_info: labware 信息字典
protocol_steps: 协议步骤列表
workstation_name: 工作站名称
action_resource_mapping: action 到 resource_name 的映射字典,可选
"""
G = WorkflowGraph()
resource_last_writer = {}
protocol_steps = refactor_data(protocol_steps, action_resource_mapping)
# Protocol-graph construction for organic chemistry & liquid-handling stations
WORKSTATION_ID = workstation_name
# Create a resource node for every labware item
res_index = 0
for labware_id, item in labware_info.items():
# item_id = item.get("id") or item.get("name", f"item_{uuid.uuid4()}")
node_id = str(uuid.uuid4())
# Determine the node type
if "Rack" in str(labware_id) or "Tip" in str(labware_id):
lab_node_type = "Labware"
description = f"Prepare Labware: {labware_id}"
liquid_type = []
liquid_volume = []
elif item.get("type") == "hardware" or "reactor" in str(labware_id).lower():
if "reactor" not in str(labware_id).lower():
continue
lab_node_type = "Sample"
description = f"Prepare Reactor: {labware_id}"
liquid_type = []
liquid_volume = []
else:
lab_node_type = "Reagent"
description = f"Add Reagent to Flask: {labware_id}"
liquid_type = [labware_id]
liquid_volume = [1e5]
res_index += 1
G.add_node(
node_id,
template_name="create_resource",
resource_name="host_node",
name=f"Res {res_index}",
description=description,
lab_node_type=lab_node_type,
footer="create_resource-host_node",
param={
"res_id": labware_id,
"device_id": WORKSTATION_ID,
"class_name": "container",
"parent": WORKSTATION_ID,
"bind_locations": {"x": 0.0, "y": 0.0, "z": 0.0},
"liquid_input_slot": [-1],
"liquid_type": liquid_type,
"liquid_volume": liquid_volume,
"slot_on_deck": "",
},
)
resource_last_writer[labware_id] = f"{node_id}:labware"
last_control_node_id = None
# Process the protocol steps
for step in protocol_steps:
node_id = str(uuid.uuid4())
G.add_node(node_id, **step)
# Control flow
if last_control_node_id is not None:
G.add_edge(last_control_node_id, node_id, source_port="ready", target_port="ready")
last_control_node_id = node_id
# Material flow
params = step.get("param", {})
input_resources_possible_names = [
"vessel",
"to_vessel",
"from_vessel",
"reagent",
"solvent",
"compound",
"sources",
"targets",
]
for target_port in input_resources_possible_names:
resource_name = params.get(target_port)
if resource_name and resource_name in resource_last_writer:
source_node, source_port = resource_last_writer[resource_name].split(":")
G.add_edge(source_node, node_id, source_port=source_port, target_port=target_port)
output_resources = {
"vessel_out": params.get("vessel"),
"from_vessel_out": params.get("from_vessel"),
"to_vessel_out": params.get("to_vessel"),
"filtrate_out": params.get("filtrate_vessel"),
"reagent": params.get("reagent"),
"solvent": params.get("solvent"),
"compound": params.get("compound"),
"sources_out": params.get("sources"),
"targets_out": params.get("targets"),
}
for source_port, resource_name in output_resources.items():
if resource_name:
resource_last_writer[resource_name] = f"{node_id}:{source_port}"
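# Illustrative example (hypothetical uuids): once a step writes params["targets"] == "plate_1",
# resource_last_writer["plate_1"] becomes "<that step's uuid>:targets_out", so the next step that lists
# "plate_1" among its sources gets an edge targets_out -> sources from that writer node.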
return G
def draw_protocol_graph(protocol_graph: WorkflowGraph, output_path: str):
"""
(Helper) Draw the protocol workflow graph with networkx and matplotlib for visualization.
"""
if not protocol_graph:
print("Cannot draw graph: Graph object is empty.")
return
G = nx.DiGraph()
for node_id, attrs in protocol_graph.nodes.items():
label = attrs.get("description", attrs.get("template_name", node_id[:8]))
G.add_node(node_id, label=label, **attrs)
for edge in protocol_graph.edges:
G.add_edge(edge["source"], edge["target"])
plt.figure(figsize=(20, 15))
try:
pos = nx.nx_agraph.graphviz_layout(G, prog="dot")
except Exception:
pos = nx.shell_layout(G) # Fallback layout
node_labels = {node: data["label"] for node, data in G.nodes(data=True)}
nx.draw(
G,
pos,
with_labels=False,
node_size=2500,
node_color="skyblue",
node_shape="o",
edge_color="gray",
width=1.5,
arrowsize=15,
)
nx.draw_networkx_labels(G, pos, labels=node_labels, font_size=8, font_weight="bold")
plt.title("Chemical Protocol Workflow Graph", size=15)
plt.savefig(output_path, dpi=300, bbox_inches="tight")
plt.close()
print(f" - Visualization saved to '{output_path}'")
COMPASS = {"n", "e", "s", "w", "ne", "nw", "se", "sw", "c"}
def _is_compass(port: str) -> bool:
return isinstance(port, str) and port.lower() in COMPASS
def draw_protocol_graph_with_ports(protocol_graph, output_path: str, rankdir: str = "LR"):
"""
Draw the protocol workflow graph using Graphviz port syntax.
- If an edge's source_port/target_port is a compass point (n/e/s/w/...), the compass point is used directly.
- Otherwise the node is given a record shape and named ports (<portname>) are defined for it.
Rendered with PyGraphviz and written to output_path (the file suffix selects the format, e.g. .png/.svg/.pdf)
"""
if not protocol_graph:
print("Cannot draw graph: Graph object is empty.")
return
# 1) Build the directed graph with networkx first, keeping the port attributes
G = nx.DiGraph()
for node_id, attrs in protocol_graph.nodes.items():
label = attrs.get("description", attrs.get("template_name", node_id[:8]))
# Keep a clean "core label" for the middle slot of the record
G.add_node(node_id, _core_label=str(label), **{k: v for k, v in attrs.items() if k not in ("label",)})
edges_data = []
in_ports_by_node = {}  # collect named input ports
out_ports_by_node = {}  # collect named output ports
for edge in protocol_graph.edges:
u = edge["source"]
v = edge["target"]
sp = edge.get("source_handle_key") or edge.get("source_port")
tp = edge.get("target_handle_key") or edge.get("target_port")
# Record it in the graph (keeping the original port info)
G.add_edge(u, v, source_handle_key=sp, target_handle_key=tp)
edges_data.append((u, v, sp, tp))
# If it is not a compass point, classify it as a named port first (records are built for the nodes later)
if sp and not _is_compass(sp):
out_ports_by_node.setdefault(u, set()).add(str(sp))
if tp and not _is_compass(tp):
in_ports_by_node.setdefault(v, set()).add(str(tp))
# 2) Convert to an AGraph and render with Graphviz
A = to_agraph(G)
A.graph_attr.update(rankdir=rankdir, splines="true", concentrate="false", fontsize="10")
A.node_attr.update(
shape="box", style="rounded,filled", fillcolor="lightyellow", color="#999999", fontname="Helvetica"
)
A.edge_attr.update(arrowsize="0.8", color="#666666")
# 3) Give nodes that need named ports a record shape and label
# left column = input ports; middle = core label; right column = output ports
for n in A.nodes():
node = A.get_node(n)
core = G.nodes[n].get("_core_label", n)
in_ports = sorted(in_ports_by_node.get(n, []))
out_ports = sorted(out_ports_by_node.get(n, []))
# Use a record if this node has named ports; otherwise keep the plain box
if in_ports or out_ports:
def port_fields(ports):
if not ports:
return " " # 必须留一个空槽占位
# 每个端口一个小格子,<p> name
return "|".join(f"<{re.sub(r'[^A-Za-z0-9_:.|-]', '_', p)}> {p}" for p in ports)
left = port_fields(in_ports)
right = port_fields(out_ports)
# Three columns: left (inputs) | middle (node name) | right (outputs)
record_label = f"{{ {left} | {core} | {right} }}"
node.attr.update(shape="record", label=record_label)
else:
# No named ports: plain box showing the core label
node.attr.update(label=str(core))
# 4) Set headport / tailport on each edge
# - compass port: use the compass value directly (e.g., headport="e")
# - named port: use the <port> name defined in the record (same name is enough)
for u, v, sp, tp in edges_data:
e = A.get_edge(u, v)
# Graphviz attributes: tail is the source, head is the target
if sp:
if _is_compass(sp):
e.attr["tailport"] = sp.lower()
else:
# Must match the <port> name in the record label; special characters were already sanitized there
e.attr["tailport"] = re.sub(r"[^A-Za-z0-9_:.|-]", "_", str(sp))
if tp:
if _is_compass(tp):
e.attr["headport"] = tp.lower()
else:
e.attr["headport"] = re.sub(r"[^A-Za-z0-9_:.|-]", "_", str(tp))
# Optional: constraint/splines etc. can be tuned to route edges more tightly
# e.attr["arrowhead"] = "vee"
# 5) Output
A.draw(output_path, prog="dot")
print(f" - Port-aware workflow rendered to '{output_path}'")
# ---------------- Registry Adapter ----------------
class RegistryAdapter:
"""根据 module 的类名(冒号右侧)反查 registry 的 resource_name原 device_class并抽取参数顺序"""
def __init__(self, device_registry: Dict[str, Any]):
self.device_registry = device_registry or {}
self.module_class_to_resource = self._build_module_class_index()
def _build_module_class_index(self) -> Dict[str, str]:
idx = {}
for resource_name, info in self.device_registry.items():
module = info.get("module")
if isinstance(module, str) and ":" in module:
cls = module.split(":")[-1]
idx[cls] = resource_name
idx[cls.lower()] = resource_name
return idx
def resolve_resource_by_classname(self, class_name: str) -> Optional[str]:
if not class_name:
return None
return self.module_class_to_resource.get(class_name) or self.module_class_to_resource.get(class_name.lower())
def get_device_module(self, resource_name: Optional[str]) -> Optional[str]:
if not resource_name:
return None
return self.device_registry.get(resource_name, {}).get("module")
def get_actions(self, resource_name: Optional[str]) -> Dict[str, Any]:
if not resource_name:
return {}
return (self.device_registry.get(resource_name, {}).get("class", {}).get("action_value_mappings", {})) or {}
def get_action_schema(self, resource_name: Optional[str], template_name: str) -> Optional[Json]:
return (self.get_actions(resource_name).get(template_name) or {}).get("schema")
def get_action_goal_default(self, resource_name: Optional[str], template_name: str) -> Json:
return (self.get_actions(resource_name).get(template_name) or {}).get("goal_default", {}) or {}
def get_action_input_keys(self, resource_name: Optional[str], template_name: str) -> List[str]:
schema = self.get_action_schema(resource_name, template_name) or {}
goal = (schema.get("properties") or {}).get("goal") or {}
props = goal.get("properties") or {}
required = goal.get("required") or []
return list(dict.fromkeys(required + list(props.keys())))
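# Minimal sketch of how RegistryAdapter resolves a registry entry from a class name.
# The registry content below is an assumed example, not a real device definition.
def _example_registry_adapter() -> None:
    registry = {
        "liquid_handler.example": {
            "module": "example.devices.liquid_handler:ExampleLiquidHandler",
            "class": {
                "action_value_mappings": {
                    "transfer_liquid": {
                        "schema": {"properties": {"goal": {"properties": {"volume": {}}, "required": ["volume"]}}},
                        "goal_default": {"volume": 0.0},
                    }
                }
            },
        }
    }
    adapter = RegistryAdapter(registry)
    resource = adapter.resolve_resource_by_classname("ExampleLiquidHandler")  # -> "liquid_handler.example"
    keys = adapter.get_action_input_keys(resource, "transfer_liquid")  # -> ["volume"]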

View File

@@ -0,0 +1,356 @@
"""
JSON 工作流转换模块
提供从多种 JSON 格式转换为统一工作流格式的功能。
支持的格式:
1. workflow/reagent 格式
2. steps_info/labware_info 格式
"""
import json
from os import PathLike
from pathlib import Path
from typing import Any, Dict, List, Optional, Set, Tuple, Union
from unilabos.workflow.common import WorkflowGraph, build_protocol_graph
from unilabos.registry.registry import lab_registry
def get_action_handles(resource_name: str, template_name: str) -> Dict[str, List[str]]:
"""
从 registry 获取指定设备和动作的 handles 配置
Args:
resource_name: 设备资源名称,如 "liquid_handler.prcxi"
template_name: 动作模板名称,如 "transfer_liquid"
Returns:
包含 source 和 target handler_keys 的字典:
{"source": ["sources_out", "targets_out", ...], "target": ["sources", "targets", ...]}
"""
result = {"source": [], "target": []}
device_info = lab_registry.device_type_registry.get(resource_name, {})
if not device_info:
return result
action_mappings = device_info.get("class", {}).get("action_value_mappings", {})
action_config = action_mappings.get(template_name, {})
handles = action_config.get("handles", {})
if isinstance(handles, dict):
# input handles are collected under the "source" key
for handle in handles.get("input", []):
handler_key = handle.get("handler_key", "")
if handler_key:
result["source"].append(handler_key)
# output handles are collected under the "target" key
for handle in handles.get("output", []):
handler_key = handle.get("handler_key", "")
if handler_key:
result["target"].append(handler_key)
return result
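# Usage sketch: handler keys for one action, split into the "source" bucket (the action's
# input handles) and the "target" bucket (its output handles). The resource/template names
# below are taken from ACTION_RESOURCE_MAPPING further down; the exact keys returned depend
# on the local registry contents.
def _example_get_action_handles() -> None:
    handles = get_action_handles("liquid_handler.prcxi", "transfer_liquid")
    print(handles["source"])  # e.g. ["sources", "targets", ...]
    print(handles["target"])  # e.g. ["sources_out", "targets_out", ...]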
def validate_workflow_handles(graph: WorkflowGraph) -> Tuple[bool, List[str]]:
"""
校验工作流图中所有边的句柄配置是否正确
Args:
graph: 工作流图对象
Returns:
(is_valid, errors): 是否有效,错误信息列表
"""
errors = []
nodes = graph.nodes
for edge in graph.edges:
left_uuid = edge.get("source")
right_uuid = edge.get("target")
# target_handle_key is the incoming handle on the target (right) node
# source_handle_key is the outgoing handle on the source (left) node
right_source_conn_key = edge.get("target_handle_key", "")
left_target_conn_key = edge.get("source_handle_key", "")
# Fetch source and target node info
left_node = nodes.get(left_uuid, {})
right_node = nodes.get(right_uuid, {})
left_res_name = left_node.get("resource_name", "")
left_template_name = left_node.get("template_name", "")
right_res_name = right_node.get("resource_name", "")
right_template_name = right_node.get("template_name", "")
# Output handles of the source (left) node
left_node_handles = get_action_handles(left_res_name, left_template_name)
target_valid_keys = left_node_handles.get("target", [])
target_valid_keys.append("ready")
# Input handles of the target (right) node
right_node_handles = get_action_handles(right_res_name, right_template_name)
source_valid_keys = right_node_handles.get("source", [])
source_valid_keys.append("ready")
# The edge's source handle must be set and must match one of the valid keys
if not right_source_conn_key:
node_name = left_node.get("name", left_uuid[:8])
errors.append(f"Source node '{node_name}' has an empty source_handle_key; " f"expected one of: {source_valid_keys}")
elif right_source_conn_key not in source_valid_keys:
node_name = left_node.get("name", left_uuid[:8])
errors.append(
f"源节点 '{node_name}' 的 source 端点 '{right_source_conn_key}' 不存在," f"支持的端点: {source_valid_keys}"
)
# The edge's target handle must be set and must match one of the valid keys
if not left_target_conn_key:
node_name = right_node.get("name", right_uuid[:8])
errors.append(f"Target node '{node_name}' has an empty target_handle_key; " f"expected one of: {target_valid_keys}")
elif left_target_conn_key not in target_valid_keys:
node_name = right_node.get("name", right_uuid[:8])
errors.append(
f"目标节点 '{node_name}' 的 target 端点 '{left_target_conn_key}' 不存在,"
f"支持的端点: {target_valid_keys}"
)
return len(errors) == 0, errors
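# Usage sketch: convert_from_json (defined below) already runs this check when validate=True;
# calling it directly is useful when a graph is assembled by hand.
def _example_validate(graph: WorkflowGraph) -> None:
    is_valid, errors = validate_workflow_handles(graph)
    if not is_valid:
        for err in errors:
            print(f"handle warning: {err}")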
# Mapping from action name to resource_name
ACTION_RESOURCE_MAPPING: Dict[str, str] = {
# Biology / lab automation operations
"transfer_liquid": "liquid_handler.prcxi",
"transfer": "liquid_handler.prcxi",
"incubation": "incubator.prcxi",
"move_labware": "labware_mover.prcxi",
"oscillation": "shaker.prcxi",
# Organic chemistry operations
"HeatChillToTemp": "heatchill.chemputer",
"StopHeatChill": "heatchill.chemputer",
"StartHeatChill": "heatchill.chemputer",
"HeatChill": "heatchill.chemputer",
"Dissolve": "stirrer.chemputer",
"Transfer": "liquid_handler.chemputer",
"Evaporate": "rotavap.chemputer",
"Recrystallize": "reactor.chemputer",
"Filter": "filter.chemputer",
"Dry": "dryer.chemputer",
"Add": "liquid_handler.chemputer",
}
def normalize_steps(data: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""
将不同格式的步骤数据规范化为统一格式
支持的输入格式:
- action + parameters
- action + action_args
- operation + parameters
Args:
data: 原始步骤数据列表
Returns:
规范化后的步骤列表,格式为 [{"action": str, "parameters": dict, "description": str?, "step_number": int?}, ...]
"""
normalized = []
for idx, step in enumerate(data):
# Action name (either the "action" or the "operation" field)
action = step.get("action") or step.get("operation")
if not action:
continue
# Parameters (either the "parameters" or the "action_args" field)
raw_params = step.get("parameters") or step.get("action_args") or {}
params = dict(raw_params)
# Normalize source/target -> sources/targets
if "source" in raw_params and "sources" not in raw_params:
params["sources"] = raw_params["source"]
if "target" in raw_params and "targets" not in raw_params:
params["targets"] = raw_params["target"]
# Description (either the "description" or the "purpose" field)
description = step.get("description") or step.get("purpose")
# Step number (prefer the original step_number, otherwise index + 1)
step_number = step.get("step_number", idx + 1)
step_dict = {"action": action, "parameters": params, "step_number": step_number}
if description:
step_dict["description"] = description
normalized.append(step_dict)
return normalized
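# Example of the normalization performed above; the step content is an assumed sample.
def _example_normalize_steps() -> None:
    raw = [
        {"operation": "transfer_liquid", "parameters": {"source": "plate_A1", "target": "plate_B1", "volume": 50}},
        {"action": "incubation", "action_args": {"temperature": 37, "time": 600}, "purpose": "grow cells"},
    ]
    steps = normalize_steps(raw)
    # steps[0]["parameters"] now also carries "sources"/"targets" copied from "source"/"target"
    # steps[1] becomes {"action": "incubation", "description": "grow cells", "step_number": 2, ...}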
def normalize_labware(data: List[Dict[str, Any]]) -> Dict[str, Dict[str, Any]]:
"""
将不同格式的 labware 数据规范化为统一的字典格式
支持的输入格式:
- reagent_name + material_name + positions
- name + labware + slot
Args:
data: 原始 labware 数据列表
Returns:
规范化后的 labware 字典,格式为 {name: {"slot": int, "labware": str, "well": list, "type": str, "role": str, "name": str}, ...}
"""
labware = {}
for item in data:
# Key name (prefer reagent_name, then material_name or name)
reagent_name = item.get("reagent_name")
key = reagent_name or item.get("material_name") or item.get("name")
if not key:
continue
key = str(key)
# Handle duplicate keys by appending a numeric suffix
idx = 1
original_key = key
while key in labware:
idx += 1
key = f"{original_key}_{idx}"
labware[key] = {
"slot": item.get("positions") or item.get("slot"),
"labware": item.get("material_name") or item.get("labware"),
"well": item.get("well", []),
"type": item.get("type", "reagent"),
"role": item.get("role", ""),
"name": key,
}
return labware
def convert_from_json(
data: Union[str, PathLike, Dict[str, Any]],
workstation_name: str = "PRCXi",
validate: bool = True,
) -> WorkflowGraph:
"""
从 JSON 数据或文件转换为 WorkflowGraph
支持的 JSON 格式:
1. {"workflow": [...], "reagent": {...}} - 直接格式
2. {"steps_info": [...], "labware_info": [...]} - 需要规范化的格式
Args:
data: JSON 文件路径、字典数据、或 JSON 字符串
workstation_name: 工作站名称,默认 "PRCXi"
validate: 是否校验句柄配置,默认 True
Returns:
WorkflowGraph: 构建好的工作流图
Raises:
ValueError: 不支持的 JSON 格式 或 句柄校验失败
FileNotFoundError: 文件不存在
json.JSONDecodeError: JSON 解析失败
"""
# Handle the input data
if isinstance(data, (str, PathLike)):
path = Path(data)
if path.exists():
with path.open("r", encoding="utf-8") as fp:
json_data = json.load(fp)
elif isinstance(data, str):
# Try to parse it as a JSON string
json_data = json.loads(data)
else:
raise FileNotFoundError(f"File not found: {data}")
elif isinstance(data, dict):
json_data = data
else:
raise TypeError(f"Unsupported data type: {type(data)}")
# Parse the data according to its format
if "workflow" in json_data and "reagent" in json_data:
# Format 1: workflow/reagent, already normalized
protocol_steps = json_data["workflow"]
labware_info = json_data["reagent"]
elif "steps_info" in json_data and "labware_info" in json_data:
# Format 2: steps_info/labware_info, needs normalization
protocol_steps = normalize_steps(json_data["steps_info"])
labware_info = normalize_labware(json_data["labware_info"])
elif "steps" in json_data and "labware" in json_data:
# Format 3: steps/labware, another common format
protocol_steps = normalize_steps(json_data["steps"])
if isinstance(json_data["labware"], list):
labware_info = normalize_labware(json_data["labware"])
else:
labware_info = json_data["labware"]
else:
raise ValueError(
"Unsupported JSON format. Supported formats:\n"
"1. {'workflow': [...], 'reagent': {...}}\n"
"2. {'steps_info': [...], 'labware_info': [...]}\n"
"3. {'steps': [...], 'labware': [...]}"
)
# Build the workflow graph
graph = build_protocol_graph(
labware_info=labware_info,
protocol_steps=protocol_steps,
workstation_name=workstation_name,
action_resource_mapping=ACTION_RESOURCE_MAPPING,
)
# Validate the handle configuration
if validate:
is_valid, errors = validate_workflow_handles(graph)
if not is_valid:
import warnings
for error in errors:
warnings.warn(f"Handle validation warning: {error}")
return graph
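# Usage sketch: building a graph from a file or an in-memory dict.
# "protocol.json" and the workstation name are assumed examples.
def _example_convert_from_json() -> None:
    graph = convert_from_json("protocol.json", workstation_name="PRCXi")
    node_link = graph.to_node_link_dict()  # same shape as convert_json_to_node_link() below
    print(len(node_link.get("nodes", [])), "nodes")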
def convert_json_to_node_link(
data: Union[str, PathLike, Dict[str, Any]],
workstation_name: str = "PRCXi",
) -> Dict[str, Any]:
"""
将 JSON 数据转换为 node-link 格式的字典
Args:
data: JSON 文件路径、字典数据、或 JSON 字符串
workstation_name: 工作站名称,默认 "PRCXi"
Returns:
Dict: node-link 格式的工作流数据
"""
graph = convert_from_json(data, workstation_name)
return graph.to_node_link_dict()
def convert_json_to_workflow_list(
data: Union[str, PathLike, Dict[str, Any]],
workstation_name: str = "PRCXi",
) -> List[Dict[str, Any]]:
"""
将 JSON 数据转换为工作流列表格式
Args:
data: JSON 文件路径、字典数据、或 JSON 字符串
workstation_name: 工作站名称,默认 "PRCXi"
Returns:
List: 工作流节点列表
"""
graph = convert_from_json(data, workstation_name)
return graph.to_dict()
# Aliases with a leading underscore, kept for backward compatibility
_normalize_steps = normalize_steps
_normalize_labware = normalize_labware

View File

@@ -0,0 +1,241 @@
import ast
import json
from typing import Dict, List, Any, Tuple, Optional
from .common import WorkflowGraph, RegistryAdapter
Json = Dict[str, Any]
# ---------------- Converter ----------------
class DeviceMethodConverter:
"""
- 字段统一resource_name原 device_class、template_name原 action_key
- params 单层inputs 使用 'params.' 前缀
- SimpleGraph.add_workflow_node 负责变量连线与边
"""
def __init__(self, device_registry: Optional[Dict[str, Any]] = None):
self.graph = WorkflowGraph()
self.variable_sources: Dict[str, Dict[str, Any]] = {} # var -> {node_id, output_name}
self.instance_to_resource: Dict[str, Optional[str]] = {} # instance name -> resource_name
self.node_id_counter: int = 0
self.registry = RegistryAdapter(device_registry or {})
# ---- helpers ----
def _new_node_id(self) -> int:
nid = self.node_id_counter
self.node_id_counter += 1
return nid
def _assign_targets(self, targets) -> List[str]:
names: List[str] = []
if isinstance(targets, ast.Tuple):
for elt in targets.elts:
if isinstance(elt, ast.Name):
names.append(elt.id)
elif isinstance(targets, ast.Name):
names.append(targets.id)
return names
def _extract_device_instantiation(self, node) -> Optional[Tuple[str, str]]:
if not isinstance(node.value, ast.Call):
return None
callee = node.value.func
if isinstance(callee, ast.Name):
class_name = callee.id
elif isinstance(callee, ast.Attribute) and isinstance(callee.value, ast.Name):
class_name = callee.attr
else:
return None
if isinstance(node.targets[0], ast.Name):
instance = node.targets[0].id
return instance, class_name
return None
def _extract_call(self, call) -> Tuple[str, str, Dict[str, Any], str]:
owner_name, method_name, call_kind = "", "", "func"
if isinstance(call.func, ast.Attribute):
method_name = call.func.attr
if isinstance(call.func.value, ast.Name):
owner_name = call.func.value.id
call_kind = "instance" if owner_name in self.instance_to_resource else "class_or_module"
elif isinstance(call.func.value, ast.Attribute) and isinstance(call.func.value.value, ast.Name):
owner_name = call.func.value.attr
call_kind = "class_or_module"
elif isinstance(call.func, ast.Name):
method_name = call.func.id
call_kind = "func"
def pack(node):
if isinstance(node, ast.Name):
return {"type": "variable", "value": node.id}
if isinstance(node, ast.Constant):
return {"type": "constant", "value": node.value}
if isinstance(node, ast.Dict):
return {"type": "dict", "value": self._parse_dict(node)}
if isinstance(node, ast.List):
return {"type": "list", "value": self._parse_list(node)}
return {"type": "raw", "value": ast.unparse(node) if hasattr(ast, "unparse") else str(node)}
args: Dict[str, Any] = {}
pos: List[Any] = []
for a in call.args:
pos.append(pack(a))
for kw in call.keywords:
args[kw.arg] = pack(kw.value)
if pos:
args["_positional"] = pos
return owner_name, method_name, args, call_kind
def _parse_dict(self, node) -> Dict[str, Any]:
out: Dict[str, Any] = {}
for k, v in zip(node.keys, node.values):
if isinstance(k, ast.Constant):
key = str(k.value)
if isinstance(v, ast.Name):
out[key] = f"var:{v.id}"
elif isinstance(v, ast.Constant):
out[key] = v.value
elif isinstance(v, ast.Dict):
out[key] = self._parse_dict(v)
elif isinstance(v, ast.List):
out[key] = self._parse_list(v)
return out
def _parse_list(self, node) -> List[Any]:
out: List[Any] = []
for elt in node.elts:
if isinstance(elt, ast.Name):
out.append(f"var:{elt.id}")
elif isinstance(elt, ast.Constant):
out.append(elt.value)
elif isinstance(elt, ast.Dict):
out.append(self._parse_dict(elt))
elif isinstance(elt, ast.List):
out.append(self._parse_list(elt))
return out
def _normalize_var_tokens(self, x: Any) -> Any:
if isinstance(x, str) and x.startswith("var:"):
return {"__var__": x[4:]}
if isinstance(x, list):
return [self._normalize_var_tokens(i) for i in x]
if isinstance(x, dict):
return {k: self._normalize_var_tokens(v) for k, v in x.items()}
return x
def _make_params_payload(self, resource_name: Optional[str], template_name: str, call_args: Dict[str, Any]) -> Dict[str, Any]:
input_keys = self.registry.get_action_input_keys(resource_name, template_name) if resource_name else []
defaults = self.registry.get_action_goal_default(resource_name, template_name) if resource_name else {}
params: Dict[str, Any] = dict(defaults)
def unpack(p):
t, v = p.get("type"), p.get("value")
if t == "variable":
return {"__var__": v}
if t == "dict":
return self._normalize_var_tokens(v)
if t == "list":
return self._normalize_var_tokens(v)
return v
for k, p in call_args.items():
if k == "_positional":
continue
params[k] = unpack(p)
pos = call_args.get("_positional", [])
if pos:
if input_keys:
for i, p in enumerate(pos):
if i >= len(input_keys):
break
name = input_keys[i]
if name in params:
continue
params[name] = unpack(p)
else:
for i, p in enumerate(pos):
params[f"arg_{i}"] = unpack(p)
return params
# ---- handlers ----
def _on_assign(self, stmt):
inst = self._extract_device_instantiation(stmt)
if inst:
instance, code_class = inst
resource_name = self.registry.resolve_resource_by_classname(code_class)
self.instance_to_resource[instance] = resource_name
return
if isinstance(stmt.value, ast.Call):
owner, method, call_args, kind = self._extract_call(stmt.value)
if kind == "instance":
device_key = owner
resource_name = self.instance_to_resource.get(owner)
else:
device_key = owner
resource_name = self.registry.resolve_resource_by_classname(owner)
module = self.registry.get_device_module(resource_name)
params = self._make_params_payload(resource_name, method, call_args)
nid = self._new_node_id()
self.graph.add_workflow_node(
nid,
device_key=device_key,
resource_name=resource_name, # ✅
module=module,
template_name=method, # ✅
params=params,
variable_sources=self.variable_sources,
add_ready_if_no_vars=True,
prev_node_id=(nid - 1) if nid > 0 else None,
)
out_vars = self._assign_targets(stmt.targets[0])
for var in out_vars:
self.variable_sources[var] = {"node_id": nid, "output_name": "result"}
def _on_expr(self, stmt):
if not isinstance(stmt.value, ast.Call):
return
owner, method, call_args, kind = self._extract_call(stmt.value)
if kind == "instance":
device_key = owner
resource_name = self.instance_to_resource.get(owner)
else:
device_key = owner
resource_name = self.registry.resolve_resource_by_classname(owner)
module = self.registry.get_device_module(resource_name)
params = self._make_params_payload(resource_name, method, call_args)
nid = self._new_node_id()
self.graph.add_workflow_node(
nid,
device_key=device_key,
resource_name=resource_name, # ✅
module=module,
template_name=method, # ✅
params=params,
variable_sources=self.variable_sources,
add_ready_if_no_vars=True,
prev_node_id=(nid - 1) if nid > 0 else None,
)
def convert(self, python_code: str):
tree = ast.parse(python_code)
for stmt in tree.body:
if isinstance(stmt, ast.Assign):
self._on_assign(stmt)
elif isinstance(stmt, ast.Expr):
self._on_expr(stmt)
return self
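# Usage sketch: converting a small Python script into a workflow graph. The class and method
# names in the snippet are assumed examples; class-name resolution only succeeds when a matching
# "module" entry exists in the supplied device registry.
def _example_device_method_converter() -> None:
    code = (
        "handler = ExampleLiquidHandler()\n"
        "plate = handler.pick_up_plate(slot=1)\n"
        "handler.transfer_liquid(plate, volume=50)\n"
    )
    converter = DeviceMethodConverter(device_registry={})
    graph = converter.convert(code).graph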

View File

@@ -0,0 +1,131 @@
from typing import List, Any, Dict
import xml.etree.ElementTree as ET
def convert_to_type(val: str) -> Any:
"""将字符串值转换为适当的数据类型"""
if val == "True":
return True
if val == "False":
return False
if val == "?":
return None
if val.endswith(" g"):
return float(val.split(" ")[0])
if val.endswith("mg"):
return float(val.split("mg")[0])
elif val.endswith("mmol"):
return float(val.split("mmol")[0]) / 1000
elif val.endswith("mol"):
return float(val.split("mol")[0])
elif val.endswith("ml"):
return float(val.split("ml")[0])
elif val.endswith("RPM"):
return float(val.split("RPM")[0])
elif val.endswith(" °C"):
return float(val.split(" ")[0])
elif val.endswith(" %"):
return float(val.split(" ")[0])
return val
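# Worked examples for the unit handling above (assumed sample values):
def _example_convert_to_type() -> None:
    assert convert_to_type("2.5 g") == 2.5
    assert convert_to_type("150mg") == 150.0
    assert convert_to_type("20mmol") == 0.02  # mmol is converted to mol
    assert convert_to_type("1.5mol") == 1.5
    assert convert_to_type("10ml") == 10.0
    assert convert_to_type("300RPM") == 300.0
    assert convert_to_type("?") is None
    assert convert_to_type("True") is True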
def flatten_xdl_procedure(procedure_elem: ET.Element) -> List[ET.Element]:
"""展平嵌套的XDL程序结构"""
flattened_operations = []
TEMP_UNSUPPORTED_PROTOCOL = ["Purge", "Wait", "Stir", "ResetHandling"]
def extract_operations(element: ET.Element):
if element.tag not in ["Prep", "Reaction", "Workup", "Purification", "Procedure"]:
if element.tag not in TEMP_UNSUPPORTED_PROTOCOL:
flattened_operations.append(element)
for child in element:
extract_operations(child)
for child in procedure_elem:
extract_operations(child)
return flattened_operations
def parse_xdl_content(xdl_content: str) -> tuple:
"""解析XDL内容"""
try:
xdl_content_cleaned = "".join(c for c in xdl_content if c.isprintable())
root = ET.fromstring(xdl_content_cleaned)
synthesis_elem = root.find("Synthesis")
if synthesis_elem is None:
return None, None, None
# Parse hardware components
hardware_elem = synthesis_elem.find("Hardware")
hardware = []
if hardware_elem is not None:
hardware = [{"id": c.get("id"), "type": c.get("type")} for c in hardware_elem.findall("Component")]
# Parse reagents
reagents_elem = synthesis_elem.find("Reagents")
reagents = []
if reagents_elem is not None:
reagents = [{"name": r.get("name"), "role": r.get("role", "")} for r in reagents_elem.findall("Reagent")]
# Parse the procedure
procedure_elem = synthesis_elem.find("Procedure")
if procedure_elem is None:
return None, None, None
flattened_operations = flatten_xdl_procedure(procedure_elem)
return hardware, reagents, flattened_operations
except ET.ParseError as e:
raise ValueError(f"Invalid XDL format: {e}")
def convert_xdl_to_dict(xdl_content: str) -> Dict[str, Any]:
"""
将XDL XML格式转换为标准的字典格式
Args:
xdl_content: XDL XML内容
Returns:
转换结果,包含步骤和器材信息
"""
try:
hardware, reagents, flattened_operations = parse_xdl_content(xdl_content)
if hardware is None:
return {"error": "Failed to parse XDL content", "success": False}
# Convert XDL elements into dict format
steps_data = []
for elem in flattened_operations:
# Convert parameter value types
parameters = {}
for key, val in elem.attrib.items():
converted_val = convert_to_type(val)
if converted_val is not None:
parameters[key] = converted_val
step_dict = {
"operation": elem.tag,
"parameters": parameters,
"description": elem.get("purpose", f"Operation: {elem.tag}"),
}
steps_data.append(step_dict)
# Merge hardware and reagents into the unified labware_info format
labware_data = []
labware_data.extend({"id": hw["id"], "type": "hardware", **hw} for hw in hardware)
labware_data.extend({"name": reagent["name"], "type": "reagent", **reagent} for reagent in reagents)
return {
"success": True,
"steps": steps_data,
"labware": labware_data,
"message": f"Successfully converted XDL to dict format. Found {len(steps_data)} steps and {len(labware_data)} labware items.",
}
except Exception as e:
error_msg = f"XDL conversion failed: {str(e)}"
return {"error": error_msg, "success": False}

View File

@@ -0,0 +1,138 @@
"""
工作流工具模块
提供工作流上传等功能
"""
import json
import os
import uuid
from typing import Any, Dict, List, Optional
from unilabos.utils.banner_print import print_status
def _is_node_link_format(data: Dict[str, Any]) -> bool:
"""检查数据是否为 node-link 格式"""
return "nodes" in data and "edges" in data
def _convert_to_node_link(workflow_file: str, workflow_data: Dict[str, Any]) -> Dict[str, Any]:
"""
将非 node-link 格式的工作流数据转换为 node-link 格式
Args:
workflow_file: 工作流文件路径(用于日志)
workflow_data: 原始工作流数据
Returns:
node-link 格式的工作流数据
"""
from unilabos.workflow.convert_from_json import convert_json_to_node_link
print_status(f"检测到非 node-link 格式,正在转换...", "info")
node_link_data = convert_json_to_node_link(workflow_data)
print_status(f"转换完成", "success")
return node_link_data
def upload_workflow(
workflow_file: str,
workflow_name: Optional[str] = None,
tags: Optional[List[str]] = None,
published: bool = False,
) -> Dict[str, Any]:
"""
上传工作流到服务器
支持的输入格式:
1. node-link 格式: {"nodes": [...], "edges": [...]}
2. workflow/reagent 格式: {"workflow": [...], "reagent": {...}}
3. steps_info/labware_info 格式: {"steps_info": [...], "labware_info": [...]}
4. steps/labware 格式: {"steps": [...], "labware": [...]}
Args:
workflow_file: 工作流文件路径JSON格式
workflow_name: 工作流名称,如果不提供则从文件中读取或使用文件名
tags: 工作流标签列表,默认为空列表
published: 是否发布工作流默认为False
Returns:
Dict: API响应数据
"""
# Deferred import to avoid initializing http_client before the config file is loaded
from unilabos.app.web import http_client
if not os.path.exists(workflow_file):
print_status(f"工作流文件不存在: {workflow_file}", "error")
return {"code": -1, "message": f"文件不存在: {workflow_file}"}
# Read the workflow file
try:
with open(workflow_file, "r", encoding="utf-8") as f:
workflow_data = json.load(f)
except json.JSONDecodeError as e:
print_status(f"工作流文件JSON解析失败: {e}", "error")
return {"code": -1, "message": f"JSON解析失败: {e}"}
# 自动检测并转换格式
if not _is_node_link_format(workflow_data):
try:
workflow_data = _convert_to_node_link(workflow_file, workflow_data)
except Exception as e:
print_status(f"工作流格式转换失败: {e}", "error")
return {"code": -1, "message": f"格式转换失败: {e}"}
# 提取工作流数据
nodes = workflow_data.get("nodes", [])
edges = workflow_data.get("edges", [])
workflow_uuid_val = workflow_data.get("workflow_uuid", str(uuid.uuid4()))
wf_name_from_file = workflow_data.get("workflow_name", os.path.basename(workflow_file).replace(".json", ""))
# Determine the workflow name
final_name = workflow_name or wf_name_from_file
print_status(f"正在上传工作流: {final_name}", "info")
print_status(f" - 节点数量: {len(nodes)}", "info")
print_status(f" - 边数量: {len(edges)}", "info")
print_status(f" - 标签: {tags or []}", "info")
print_status(f" - 发布状态: {published}", "info")
# 调用 http_client 上传
result = http_client.workflow_import(
name=final_name,
workflow_uuid=workflow_uuid_val,
workflow_name=final_name,
nodes=nodes,
edges=edges,
tags=tags,
published=published,
)
if result.get("code") == 0:
data = result.get("data", {})
print_status("工作流上传成功!", "success")
print_status(f" - UUID: {data.get('uuid', 'N/A')}", "info")
print_status(f" - 名称: {data.get('name', 'N/A')}", "info")
else:
print_status(f"工作流上传失败: {result.get('message', '未知错误')}", "error")
return result
def handle_workflow_upload_command(args_dict: Dict[str, Any]) -> None:
"""
处理 workflow_upload 子命令
Args:
args_dict: 命令行参数字典
"""
workflow_file = args_dict.get("workflow_file")
workflow_name = args_dict.get("workflow_name")
tags = args_dict.get("tags", [])
published = args_dict.get("published", False)
if workflow_file:
upload_workflow(workflow_file, workflow_name, tags, published)
else:
print_status("未指定工作流文件路径,请使用 -f/--workflow_file 参数", "error")

View File

@@ -2,7 +2,7 @@
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>unilabos_msgs</name>
<version>0.10.11</version>
<version>0.10.12</version>
<description>ROS2 Messages package for unilabos devices</description>
<maintainer email="changjh@pku.edu.cn">Junhan Chang</maintainer>
<maintainer email="18435084+Xuwznln@users.noreply.github.com">Xuwznln</maintainer>