Compare commits


11 Commits

Author SHA1 Message Date
Xuwznln
2cf58ca452 Upgrade to py 3.11.14; ros 0.7; unilabos 0.10.16 2026-01-26 16:47:54 +08:00
Xuwznln
fd73bb7dcb CI Check Fix 5 2026-01-26 08:47:27 +08:00
Xuwznln
a02cecfd18 CI Check Fix 4 2026-01-26 08:20:17 +08:00
Xuwznln
d6accc3f1c CI Check Fix 3 2026-01-26 08:14:21 +08:00
Xuwznln
39dc443399 CI Check Fix 2 2026-01-26 02:23:40 +08:00
Xuwznln
37b1fca962 CI Check Fix 1 2026-01-26 02:22:21 +08:00
Xuwznln
216f19fb62 Workbench example, adjust log level, and ci check (#220)
* TestLatency Return Value Example & gitignore update

* Adjust log level & Add workbench virtual example & Add not action decorator & Add check_mode

* Add CI Check
2026-01-26 02:15:13 +08:00
Xuwznln
ec7ca6a1fe Fix/workstation yb revision (#217)
* Revert log change & update registry

* Revert opcua client & move electrolyte node
2026-01-17 16:50:20 +08:00
Xuwznln
4c8022ee95 Workstation yb merge dev ready 260113 (#216)
* feat(bioyond): add experiment design computation, supporting compound ratio and titration percentage parameters

* feat(bioyond): add measurement vial task with basic parameter configuration

* feat(bioyond): add measurement vial configuration supporting new device parameters

* feat(bioyond): update warehouse layout and dimensions to support vertically arranged measurement vials and reagent storage stacks

* feat(bioyond): improve the task creation flow so the task queue is cleaned up whether or not creation succeeds, avoiding duplicate accumulation

* feat(bioyond): add reactor temperature setting with temperature range validation and exception handling

* feat(bioyond): adjust reactor position configuration and unify the coordinate format

* feat(bioyond): add scheduler startup, supporting task queue execution and exception handling

* feat(bioyond): improve scheduler startup, adding exception handling and updating related configuration

* feat(opcua): enhance node ID parsing compatibility and data type handling

Improve node ID parsing to support multiple formats, including string and numeric identifiers (a hypothetical sketch follows the commit list below).
Add data type conversion so written values match the node's expected type.
Improve error messages to make debugging node connection problems easier.

* feat(registry): add device configuration file for the post-processing station

Add the post-processing station's YAML configuration file, including action mappings, status types, and the device description

* Add scheduler startup, merge material parameter configuration, and improve material parameter handling

* Add automatic synchronization of the workflow sequence from the Bioyond system and update related configuration

* fix: support workflow_sequence being overridden as a property in BioyondReactionStation

* fix: synchronize the workflow sequence

* feat: remove commented workflow synchronization from `reaction_station.py`.

* Add time constraint feature and related configuration

* fix: auto-update the material cache, updating it when materials are added and removing entries when they are deleted

* fix: handle both string and dict return values when adding materials so the cache is updated correctly

* fix: change Bioyond error reporting to material-change reporting; adjust logging and response messages

* feat: add experiment report simplification, removing redundant information while keeping key details

* feat: add task status event publishing to monitor and report running, timeout, completed, and error states

* fix: correct the data format error when adding materials

* Refactor bioyond_dispensing_station and reaction_station_bioyond YAML configurations

- Removed redundant action value mappings from bioyond_dispensing_station.
- Updated goal properties in bioyond_dispensing_station to use enums for target_stack and other parameters.
- Changed data types for end_point and start_point in reaction_station_bioyond to use string enums (Start, End).
- Simplified descriptions and updated measurement units from μL to mL where applicable.
- Removed unused commands from reaction_station_bioyond to streamline the configuration.

* fix: change the material unit from μL to mL

* fix: refresh_material_cache

* feat: fetch workflow step IDs dynamically and improve workflow configuration

* feat: add the ability to clear all non-core workflows on the server

* fix: fix the serialization and deserialization methods of the Bottle class

* feat: enhance material cache update logic to handle detailed information in returned data

* Add debug log

* feat(workstation): update bioyond config migration and coin cell material search logic

- Migrate bioyond_cell config to JSON structure and remove global variable dependencies
- Implement material search confirmation dialog auto-handling
- Add documentation: 20260113_物料搜寻确认弹窗自动处理功能.md and 20260113_配置迁移修改总结.md

* Refactor module paths for Bioyond devices in YAML configuration files

- Updated the module path for BioyondDispensingStation in bioyond_dispensing_station.yaml to reflect the new directory structure.
- Updated the module path for BioyondReactionStation and BioyondReactor in reaction_station_bioyond.yaml to align with the revised organization of the codebase.

* fix: unhashable type error in WareHouse; improve parent node deduplication logic

* refactor: Move config from module to instance initialization

* fix: correct the misspelled reaction_station directory name

* feat: Integrate material search logic and cleanup deprecated files

- Update coin_cell_assembly.py with material search dialog handling
- Update YB_warehouses.py with latest warehouse configurations
- Remove outdated documentation and test data files

* Refactor: Use instance attributes for action names and workflow step IDs

* refactor: Split tipbox storage into left and right warehouses

* refactor: Merge tipbox storage left and right into single warehouse

---------

Co-authored-by: ZiWei <131428629+ZiWei09@users.noreply.github.com>
Co-authored-by: Andy6M <xieqiming1132@qq.com>
2026-01-17 15:44:18 +08:00
ZiWei
ad21644db0 fix: unhashable type error in WareHouse; improve parent node deduplication logic 2026-01-14 20:15:05 +08:00
Xuwznln
9dfd58e9af fix parent_uuid fetch when bind_parent_id == node_name 2026-01-14 14:17:29 +08:00
37 changed files with 3516 additions and 7478 deletions
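For the opcua change described in the #216 commit message above (accepting both string and numeric node identifiers), a hypothetical normalization helper written against the opcua (python-opcua) package, which the CI workflow below installs, could look like the sketch that follows. The function name, default namespace, and structure are illustrative and are not taken from the repository's OpcUaClient.

# Hypothetical sketch only - the real unilabos OpcUaClient logic may differ.
from typing import Union

from opcua import ua

def to_node_id(raw: Union[str, int, ua.NodeId], default_namespace: int = 2) -> ua.NodeId:
    """Normalize 'ns=2;i=42', 'ns=2;s=Valve1', bare ints, or bare strings to a NodeId."""
    if isinstance(raw, ua.NodeId):
        return raw                                   # already a NodeId
    if isinstance(raw, int):
        return ua.NodeId(raw, default_namespace)     # numeric identifier
    if isinstance(raw, str) and raw.startswith("ns="):
        return ua.NodeId.from_string(raw)            # full "ns=...;i=..." / "ns=...;s=..." form
    return ua.NodeId(str(raw), default_namespace)    # bare string identifier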

View File

@@ -1,9 +0,0 @@
@echo off
setlocal enabledelayedexpansion
REM upgrade pip
"%PREFIX%\python.exe" -m pip install --upgrade pip
REM install extra deps
"%PREFIX%\python.exe" -m pip install paho-mqtt opentrons_shared_data
"%PREFIX%\python.exe" -m pip install git+https://github.com/Xuwznln/pylabrobot.git

View File

@@ -1,9 +0,0 @@
#!/usr/bin/env bash
set -euxo pipefail
# make sure pip is available
"$PREFIX/bin/python" -m pip install --upgrade pip
# install extra deps
"$PREFIX/bin/python" -m pip install paho-mqtt opentrons_shared_data
"$PREFIX/bin/python" -m pip install git+https://github.com/Xuwznln/pylabrobot.git

View File

@@ -1,26 +0,0 @@
.conda
# .github
.idea
# .vscode
output
pylabrobot_repo
recipes
scripts
service
temp
# unilabos/test
# unilabos/app/web
unilabos/device_mesh
unilabos_data
unilabos_msgs
unilabos.egg-info
CONTRIBUTORS
# LICENSE
MANIFEST.in
pyrightconfig.json
# README.md
# README_zh.md
setup.py
setup.cfg
.gitattrubutes
**/__pycache__

.github/dependabot.yml (new file)
View File

@@ -0,0 +1,19 @@
version: 2
updates:
# GitHub Actions
- package-ecosystem: "github-actions"
directory: "/"
target-branch: "dev"
schedule:
interval: "weekly"
day: "monday"
time: "06:00"
open-pull-requests-limit: 5
reviewers:
- "msgcenterpy-team"
labels:
- "dependencies"
- "github-actions"
commit-message:
prefix: "ci"
include: "scope"

.github/workflows/ci-check.yml (new file)
View File

@@ -0,0 +1,104 @@
name: CI Check
on:
push:
branches: [main, dev]
pull_request:
branches: [main, dev]
jobs:
registry-check:
runs-on: ubuntu-latest
defaults:
run:
shell: bash -l {0}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Setup Miniforge
uses: conda-incubator/setup-miniconda@v3
with:
miniforge-version: latest
use-mamba: true
channels: robostack-staging,conda-forge,uni-lab
channel-priority: flexible
activate-environment: check-env
auto-activate-base: false
auto-update-conda: false
show-channel-urls: true
- name: Install ROS dependencies and unilabos-msgs
run: |
# Install all packages together for proper dependency resolution
# Use mamba for faster and more reliable solving
mamba install -n check-env \
python=3.11.14 \
robostack-staging::ros-humble-ros-core \
robostack-staging::ros-humble-action-msgs \
robostack-staging::ros-humble-std-msgs \
robostack-staging::ros-humble-geometry-msgs \
robostack-staging::ros-humble-control-msgs \
robostack-staging::ros-humble-nav2-msgs \
uni-lab::ros-humble-unilabos-msgs \
robostack-staging::ros-humble-cv-bridge \
robostack-staging::ros-humble-vision-opencv \
robostack-staging::ros-humble-tf-transformations \
robostack-staging::ros-humble-moveit-msgs \
robostack-staging::ros-humble-tf2-ros \
robostack-staging::ros-humble-tf2-ros-py \
conda-forge::transforms3d \
-c robostack-staging -c conda-forge -c uni-lab -y
- name: Install pip dependencies and unilabos
run: |
# Activate the environment
conda activate check-env
# Core dependencies for devices
pip install uv
uv pip install networkx \
typing_extensions \
websockets \
msgcenterpy \
opentrons_shared_data \
pint \
fastapi \
jinja2 \
requests \
uvicorn \
git+https://github.com/Xuwznln/pylabrobot.git \
opencv-python \
pyautogui \
opcua \
pyserial \
pandas \
crcmod-plus \
pymodbus \
pywinauto_recorder \
matplotlib
# PyLabRobot (custom fork) is installed above via the git URL
# Install unilabos in editable mode
pip install -e .
- name: Run check mode (complete_registry)
run: |
conda activate check-env
python -m unilabos --check_mode --skip_env_check
- name: Check for uncommitted changes
run: |
if ! git diff --exit-code; then
echo "::error::检测到文件变化!请先在本地运行 'python -m unilabos --complete_registry' 并提交变更"
echo "变化的文件:"
git diff --name-only
exit 1
fi
echo "检查通过:无文件变化"

.gitignore
View File

@@ -4,6 +4,7 @@ temp/
output/
unilabos_data/
pyrightconfig.json
.cursorignore
## Python
# Byte-compiled / optimized / DLL files

View File

@@ -1,6 +1,6 @@
package:
name: ros-humble-unilabos-msgs
version: 0.10.15
version: 0.10.16
source:
path: ../../unilabos_msgs
target_directory: src
@@ -25,7 +25,7 @@ requirements:
build:
- ${{ compiler('cxx') }}
- ${{ compiler('c') }}
- python ==3.11.11
- python ==3.11.14
- numpy
- if: build_platform != target_platform
then:
@@ -63,14 +63,14 @@ requirements:
- robostack-staging::ros-humble-rosidl-default-generators
- robostack-staging::ros-humble-std-msgs
- robostack-staging::ros-humble-geometry-msgs
- robostack-staging::ros2-distro-mutex=0.6
- robostack-staging::ros2-distro-mutex=0.7
run:
- robostack-staging::ros-humble-action-msgs
- robostack-staging::ros-humble-ros-workspace
- robostack-staging::ros-humble-rosidl-default-runtime
- robostack-staging::ros-humble-std-msgs
- robostack-staging::ros-humble-geometry-msgs
- robostack-staging::ros2-distro-mutex=0.6
- robostack-staging::ros2-distro-mutex=0.7
- if: osx and x86_64
then:
- __osx >=${{ MACOSX_DEPLOYMENT_TARGET|default('10.14') }}

View File

@@ -1,6 +1,6 @@
package:
name: unilabos
version: "0.10.15"
version: "0.10.16"
source:
path: ../..

View File

@@ -4,7 +4,7 @@ package_name = 'unilabos'
setup(
name=package_name,
version='0.10.15',
version='0.10.16',
packages=find_packages(),
include_package_data=True,
install_requires=['setuptools'],

View File

@@ -1 +1 @@
__version__ = "0.10.15"
__version__ = "0.10.16"

unilabos/__main__.py (new file)
View File

@@ -0,0 +1,6 @@
"""Entry point for `python -m unilabos`."""
from unilabos.app.main import main
if __name__ == "__main__":
main()

View File

@@ -161,6 +161,12 @@ def parse_args():
default=False,
help="Complete registry information",
)
parser.add_argument(
"--check_mode",
action="store_true",
default=False,
help="Run in check mode for CI: validates registry imports and ensures no file changes",
)
parser.add_argument(
"--no_update_feedback",
action="store_true",
@@ -314,6 +320,12 @@ def main():
BasicConfig.machine_name = machine_name
BasicConfig.vis_2d_enable = args_dict["2d_vis"]
# Check mode 处理
check_mode = args_dict.get("check_mode", False)
BasicConfig.check_mode = check_mode
if check_mode:
print_status("Check mode 启用,将进行 complete_registry 检查", "info")
from unilabos.resources.graphio import (
read_node_link_json,
read_graphml,
@@ -331,10 +343,14 @@ def main():
# 显示启动横幅
print_unilab_banner(args_dict)
# 注册表
lab_registry = build_registry(
args_dict["registry_path"], args_dict.get("complete_registry", False), BasicConfig.upload_registry
)
# 注册表 - check_mode 时强制启用 complete_registry
complete_registry = args_dict.get("complete_registry", False) or check_mode
lab_registry = build_registry(args_dict["registry_path"], complete_registry, BasicConfig.upload_registry)
# Check mode: complete_registry 完成后直接退出git diff 检测由 CI workflow 执行
if check_mode:
print_status("Check mode: complete_registry 完成,退出", "info")
os._exit(0)
if BasicConfig.upload_registry:
# 设备注册到服务端 - 需要 ak 和 sk

View File

@@ -58,14 +58,14 @@ class JobResultStore:
feedback=feedback or {},
timestamp=time.time(),
)
logger.debug(f"[JobResultStore] Stored result for job {job_id[:8]}, status={status}")
logger.trace(f"[JobResultStore] Stored result for job {job_id[:8]}, status={status}")
def get_and_remove(self, job_id: str) -> Optional[JobResult]:
"""获取并删除任务结果"""
with self._results_lock:
result = self._results.pop(job_id, None)
if result:
logger.debug(f"[JobResultStore] Retrieved and removed result for job {job_id[:8]}")
logger.trace(f"[JobResultStore] Retrieved and removed result for job {job_id[:8]}")
return result
def get_result(self, job_id: str) -> Optional[JobResult]:

View File

@@ -23,7 +23,7 @@ from typing import Optional, Dict, Any, List
from urllib.parse import urlparse
from enum import Enum
from jedi.inference.gradual.typing import TypedDict
from typing_extensions import TypedDict
from unilabos.app.model import JobAddReq
from unilabos.ros.nodes.presets.host_node import HostNode
@@ -154,7 +154,7 @@ class DeviceActionManager:
job_info.set_ready_timeout(10) # 设置10秒超时
self.active_jobs[device_key] = job_info
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.info(f"[DeviceActionManager] Job {job_log} can start immediately for {device_key}")
logger.trace(f"[DeviceActionManager] Job {job_log} can start immediately for {device_key}")
return True
def start_job(self, job_id: str) -> bool:
@@ -210,8 +210,9 @@ class DeviceActionManager:
job_info.update_timestamp()
# 从all_jobs中移除已结束的job
del self.all_jobs[job_id]
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.info(f"[DeviceActionManager] Job {job_log} ended for {device_key}")
# job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
# logger.debug(f"[DeviceActionManager] Job {job_log} ended for {device_key}")
pass
else:
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.warning(f"[DeviceActionManager] Job {job_log} was not active for {device_key}")
@@ -227,7 +228,7 @@ class DeviceActionManager:
next_job_log = format_job_log(
next_job.job_id, next_job.task_id, next_job.device_id, next_job.action_name
)
logger.info(f"[DeviceActionManager] Next job {next_job_log} can start for {device_key}")
logger.trace(f"[DeviceActionManager] Next job {next_job_log} can start for {device_key}")
return next_job
return None
@@ -268,7 +269,7 @@ class DeviceActionManager:
# 从all_jobs中移除
del self.all_jobs[job_id]
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.info(f"[DeviceActionManager] Active job {job_log} cancelled for {device_key}")
logger.trace(f"[DeviceActionManager] Active job {job_log} cancelled for {device_key}")
# 启动下一个任务
if device_key in self.device_queues and self.device_queues[device_key]:
@@ -281,7 +282,7 @@ class DeviceActionManager:
next_job_log = format_job_log(
next_job.job_id, next_job.task_id, next_job.device_id, next_job.action_name
)
logger.info(f"[DeviceActionManager] Next job {next_job_log} can start after cancel")
logger.trace(f"[DeviceActionManager] Next job {next_job_log} can start after cancel")
return True
# 如果是排队中的任务
@@ -295,7 +296,7 @@ class DeviceActionManager:
job_log = format_job_log(
job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name
)
logger.info(f"[DeviceActionManager] Queued job {job_log} cancelled for {device_key}")
logger.trace(f"[DeviceActionManager] Queued job {job_log} cancelled for {device_key}")
return True
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
@@ -494,8 +495,12 @@ class MessageProcessor:
await self._process_message(message_type, message_data)
else:
if message_type.endswith("_material"):
logger.trace(f"[MessageProcessor] 收到一条归属 {data.get('edge_session')} 的旧消息:{data}")
logger.debug(f"[MessageProcessor] 跳过了一条归属 {data.get('edge_session')} 的旧消息: {data.get('action')}")
logger.trace(
f"[MessageProcessor] 收到一条归属 {data.get('edge_session')} 的旧消息{data}"
)
logger.debug(
f"[MessageProcessor] 跳过了一条归属 {data.get('edge_session')} 的旧消息: {data.get('action')}"
)
else:
await self._process_message(message_type, message_data)
except json.JSONDecodeError:
@@ -565,7 +570,7 @@ class MessageProcessor:
async def _process_message(self, message_type: str, message_data: Dict[str, Any]):
"""处理收到的消息"""
logger.debug(f"[MessageProcessor] Processing message: {message_type}")
logger.trace(f"[MessageProcessor] Processing message: {message_type}")
try:
if message_type == "pong":
@@ -637,13 +642,13 @@ class MessageProcessor:
await self._send_action_state_response(
device_id, action_name, task_id, job_id, "query_action_status", True, 0
)
logger.info(f"[MessageProcessor] Job {job_log} can start immediately")
logger.trace(f"[MessageProcessor] Job {job_log} can start immediately")
else:
# 需要排队
await self._send_action_state_response(
device_id, action_name, task_id, job_id, "query_action_status", False, 10
)
logger.info(f"[MessageProcessor] Job {job_log} queued")
logger.trace(f"[MessageProcessor] Job {job_log} queued")
# 通知QueueProcessor有新的队列更新
if self.queue_processor:
@@ -847,9 +852,7 @@ class MessageProcessor:
device_action_groups[key_add] = []
device_action_groups[key_add].append(item["uuid"])
logger.info(
f"[资源同步] 跨站Transfer: {item['uuid'][:8]} from {device_old_id} to {device_id}"
)
logger.info(f"[资源同步] 跨站Transfer: {item['uuid'][:8]} from {device_old_id} to {device_id}")
else:
# 正常update
key = (device_id, "update")
@@ -863,7 +866,9 @@ class MessageProcessor:
device_action_groups[key] = []
device_action_groups[key].append(item["uuid"])
logger.trace(f"[资源同步] 动作 {action} 分组数量: {len(device_action_groups)}, 总数量: {len(resource_uuid_list)}")
logger.trace(
f"[资源同步] 动作 {action} 分组数量: {len(device_action_groups)}, 总数量: {len(resource_uuid_list)}"
)
# 为每个(device_id, action)创建独立的更新线程
for (device_id, actual_action), items in device_action_groups.items():
@@ -911,13 +916,13 @@ class MessageProcessor:
# 发送确认消息
if self.websocket_client:
await self.websocket_client.send_message({
"action": "restart_acknowledged",
"data": {"reason": reason, "delay": delay}
})
await self.websocket_client.send_message(
{"action": "restart_acknowledged", "data": {"reason": reason, "delay": delay}}
)
# 设置全局重启标志
import unilabos.app.main as main_module
main_module._restart_requested = True
main_module._restart_reason = reason
@@ -927,10 +932,12 @@ class MessageProcessor:
# 在新线程中执行清理,避免阻塞当前事件循环
def do_cleanup():
import time
time.sleep(0.5) # 给当前消息处理完成的时间
logger.info(f"[MessageProcessor] Starting cleanup for restart, reason: {reason}")
try:
from unilabos.app.utils import cleanup_for_restart
if cleanup_for_restart():
logger.info("[MessageProcessor] Cleanup successful, main() will restart")
else:
@@ -1128,7 +1135,7 @@ class QueueProcessor:
success = self.message_processor.send_message(message)
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
if success:
logger.debug(f"[QueueProcessor] Sent busy/need_more for queued job {job_log}")
logger.trace(f"[QueueProcessor] Sent busy/need_more for queued job {job_log}")
else:
logger.warning(f"[QueueProcessor] Failed to send busy status for job {job_log}")
@@ -1151,7 +1158,7 @@ class QueueProcessor:
job_info.action_name,
)
logger.info(f"[QueueProcessor] Job {job_log} completed with status: {status}")
logger.trace(f"[QueueProcessor] Job {job_log} completed with status: {status}")
# 结束任务,获取下一个可执行的任务
next_job = self.device_manager.end_job(job_id)
@@ -1171,8 +1178,8 @@ class QueueProcessor:
},
}
self.message_processor.send_message(message)
next_job_log = format_job_log(next_job.job_id, next_job.task_id, next_job.device_id, next_job.action_name)
logger.info(f"[QueueProcessor] Notified next job {next_job_log} can start")
# next_job_log = format_job_log(next_job.job_id, next_job.task_id, next_job.device_id, next_job.action_name)
# logger.debug(f"[QueueProcessor] Notified next job {next_job_log} can start")
# 立即触发下一轮状态检查
self.notify_queue_update()
@@ -1314,7 +1321,7 @@ class WebSocketClient(BaseCommunicationClient):
except (KeyError, AttributeError):
logger.warning(f"[WebSocketClient] Failed to remove job {item.job_id} from HostNode status")
logger.info(f"[WebSocketClient] Intercepting final status for job_id: {item.job_id} - {status}")
# logger.debug(f"[WebSocketClient] Intercepting final status for job_id: {item.job_id} - {status}")
# 通知队列处理器job完成包括timeout的job
self.queue_processor.handle_job_completed(item.job_id, status)
@@ -1381,7 +1388,9 @@ class WebSocketClient(BaseCommunicationClient):
if host_node:
# 获取设备信息
for device_id, namespace in host_node.devices_names.items():
device_key = f"{namespace}/{device_id}" if namespace.startswith("/") else f"/{namespace}/{device_id}"
device_key = (
f"{namespace}/{device_id}" if namespace.startswith("/") else f"/{namespace}/{device_id}"
)
is_online = device_key in host_node._online_devices
# 获取设备的动作信息
@@ -1395,14 +1404,16 @@ class WebSocketClient(BaseCommunicationClient):
"action_type": str(type(client).__name__),
}
devices.append({
"device_id": device_id,
"namespace": namespace,
"device_key": device_key,
"is_online": is_online,
"machine_name": host_node.device_machine_names.get(device_id, machine_name),
"actions": actions,
})
devices.append(
{
"device_id": device_id,
"namespace": namespace,
"device_key": device_key,
"is_online": is_online,
"machine_name": host_node.device_machine_names.get(device_id, machine_name),
"actions": actions,
}
)
logger.info(f"[WebSocketClient] Collected {len(devices)} devices for host_ready")
except Exception as e:

View File

@@ -22,6 +22,7 @@ class BasicConfig:
startup_json_path = None # 填写绝对路径
disable_browser = False # 禁止浏览器自动打开
port = 8002 # 本地HTTP服务
check_mode = False # CI 检查模式,用于验证 registry 导入和文件一致性
# 'TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'
log_level: Literal["TRACE", "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"] = "DEBUG"

File diff suppressed because it is too large.

View File

@@ -0,0 +1,687 @@
"""
Virtual Workbench Device - 模拟工作台设备
包含:
- 1个机械臂 (每次操作3s, 独占锁)
- 3个加热台 (每次加热10s, 可并行)
工作流程:
1. A1-A5 物料同时启动,竞争机械臂
2. 机械臂将物料移动到空闲加热台
3. 加热完成后机械臂将物料移动到C1-C5
注意:调用来自线程池,使用 threading.Lock 进行同步
"""
import logging
import time
from typing import Dict, Any, Optional
from dataclasses import dataclass
from enum import Enum
from threading import Lock, RLock
from typing_extensions import TypedDict
from unilabos.ros.nodes.base_device_node import BaseROS2DeviceNode
from unilabos.utils.decorator import not_action
# ============ TypedDict 返回类型定义 ============
class MoveToHeatingStationResult(TypedDict):
"""move_to_heating_station 返回类型"""
success: bool
station_id: int
material_id: str
material_number: int
message: str
class StartHeatingResult(TypedDict):
"""start_heating 返回类型"""
success: bool
station_id: int
material_id: str
material_number: int
message: str
class MoveToOutputResult(TypedDict):
"""move_to_output 返回类型"""
success: bool
station_id: int
material_id: str
class PrepareMaterialsResult(TypedDict):
"""prepare_materials 返回类型 - 批量准备物料"""
success: bool
count: int
material_1: int # 物料编号1
material_2: int # 物料编号2
material_3: int # 物料编号3
material_4: int # 物料编号4
material_5: int # 物料编号5
message: str
# ============ 状态枚举 ============
class HeatingStationState(Enum):
"""加热台状态枚举"""
IDLE = "idle" # 空闲
OCCUPIED = "occupied" # 已放置物料,等待加热
HEATING = "heating" # 加热中
COMPLETED = "completed" # 加热完成,等待取走
class ArmState(Enum):
"""机械臂状态枚举"""
IDLE = "idle" # 空闲
BUSY = "busy" # 工作中
@dataclass
class HeatingStation:
"""加热台数据结构"""
station_id: int
state: HeatingStationState = HeatingStationState.IDLE
current_material: Optional[str] = None # 当前物料 (如 "A1", "A2")
material_number: Optional[int] = None # 物料编号 (1-5)
heating_start_time: Optional[float] = None
heating_progress: float = 0.0
class VirtualWorkbench:
"""
Virtual Workbench Device - 虚拟工作台设备
模拟一个包含1个机械臂和3个加热台的工作站
- 机械臂操作耗时3秒同一时间只能执行一个操作
- 加热台加热耗时10秒3个加热台可并行工作
工作流:
1. 物料A1-A5并发启动线程池竞争机械臂使用权
2. 获取机械臂后,查找空闲加热台
3. 机械臂将物料放入加热台,开始加热
4. 加热完成后机械臂将物料移动到目标位置Cn
"""
_ros_node: BaseROS2DeviceNode
# 配置常量
ARM_OPERATION_TIME: float = 3.0 # 机械臂操作时间(秒)
HEATING_TIME: float = 10.0 # 加热时间(秒)
NUM_HEATING_STATIONS: int = 3 # 加热台数量
def __init__(self, device_id: Optional[str] = None, config: Optional[Dict[str, Any]] = None, **kwargs):
# 处理可能的不同调用方式
if device_id is None and "id" in kwargs:
device_id = kwargs.pop("id")
if config is None and "config" in kwargs:
config = kwargs.pop("config")
self.device_id = device_id or "virtual_workbench"
self.config = config or {}
self.logger = logging.getLogger(f"VirtualWorkbench.{self.device_id}")
self.data: Dict[str, Any] = {}
# 从config中获取可配置参数
self.ARM_OPERATION_TIME = float(self.config.get("arm_operation_time", 3.0))
self.HEATING_TIME = float(self.config.get("heating_time", 10.0))
self.NUM_HEATING_STATIONS = int(self.config.get("num_heating_stations", 3))
# 机械臂状态和锁 (使用threading.Lock)
self._arm_lock = Lock()
self._arm_state = ArmState.IDLE
self._arm_current_task: Optional[str] = None
# 加热台状态 (station_id -> HeatingStation) - 立即初始化不依赖initialize()
self._heating_stations: Dict[int, HeatingStation] = {
i: HeatingStation(station_id=i)
for i in range(1, self.NUM_HEATING_STATIONS + 1)
}
self._stations_lock = RLock() # 可重入锁,保护加热台状态
# 任务追踪
self._active_tasks: Dict[str, Dict[str, Any]] = {} # material_id -> task_info
self._tasks_lock = Lock()
# 处理其他kwargs参数
skip_keys = {"arm_operation_time", "heating_time", "num_heating_stations"}
for key, value in kwargs.items():
if key not in skip_keys and not hasattr(self, key):
setattr(self, key, value)
self.logger.info(f"=== 虚拟工作台 {self.device_id} 已创建 ===")
self.logger.info(
f"机械臂操作时间: {self.ARM_OPERATION_TIME}s | "
f"加热时间: {self.HEATING_TIME}s | "
f"加热台数量: {self.NUM_HEATING_STATIONS}"
)
@not_action
def post_init(self, ros_node: BaseROS2DeviceNode):
"""ROS节点初始化后回调"""
self._ros_node = ros_node
@not_action
def initialize(self) -> bool:
"""初始化虚拟工作台"""
self.logger.info(f"初始化虚拟工作台 {self.device_id}")
# 重置加热台状态 (已在__init__中创建这里重置为初始状态)
with self._stations_lock:
for station in self._heating_stations.values():
station.state = HeatingStationState.IDLE
station.current_material = None
station.material_number = None
station.heating_progress = 0.0
# 初始化状态
self.data.update({
"status": "Ready",
"arm_state": ArmState.IDLE.value,
"arm_current_task": None,
"heating_stations": self._get_stations_status(),
"active_tasks_count": 0,
"message": "工作台就绪",
})
self.logger.info(f"工作台初始化完成: {self.NUM_HEATING_STATIONS}个加热台就绪")
return True
@not_action
def cleanup(self) -> bool:
"""清理虚拟工作台"""
self.logger.info(f"清理虚拟工作台 {self.device_id}")
self._arm_state = ArmState.IDLE
self._arm_current_task = None
with self._stations_lock:
self._heating_stations.clear()
with self._tasks_lock:
self._active_tasks.clear()
self.data.update({
"status": "Offline",
"arm_state": ArmState.IDLE.value,
"heating_stations": {},
"message": "工作台已关闭",
})
return True
def _get_stations_status(self) -> Dict[int, Dict[str, Any]]:
"""获取所有加热台状态"""
with self._stations_lock:
return {
station_id: {
"state": station.state.value,
"current_material": station.current_material,
"material_number": station.material_number,
"heating_progress": station.heating_progress,
}
for station_id, station in self._heating_stations.items()
}
def _update_data_status(self, message: Optional[str] = None):
"""更新状态数据"""
self.data.update({
"arm_state": self._arm_state.value,
"arm_current_task": self._arm_current_task,
"heating_stations": self._get_stations_status(),
"active_tasks_count": len(self._active_tasks),
})
if message:
self.data["message"] = message
def _find_available_heating_station(self) -> Optional[int]:
"""查找空闲的加热台
Returns:
空闲加热台ID如果没有则返回None
"""
with self._stations_lock:
for station_id, station in self._heating_stations.items():
if station.state == HeatingStationState.IDLE:
return station_id
return None
def _acquire_arm(self, task_description: str) -> bool:
"""获取机械臂使用权(阻塞直到获取)
Args:
task_description: 任务描述,用于日志
Returns:
是否成功获取
"""
self.logger.info(f"[{task_description}] 等待获取机械臂...")
# 阻塞等待获取锁
self._arm_lock.acquire()
self._arm_state = ArmState.BUSY
self._arm_current_task = task_description
self._update_data_status(f"机械臂执行: {task_description}")
self.logger.info(f"[{task_description}] 成功获取机械臂使用权")
return True
def _release_arm(self):
"""释放机械臂"""
task = self._arm_current_task
self._arm_state = ArmState.IDLE
self._arm_current_task = None
self._arm_lock.release()
self._update_data_status(f"机械臂已释放 (完成: {task})")
self.logger.info(f"机械臂已释放 (完成: {task})")
def prepare_materials(
self,
count: int = 5,
) -> PrepareMaterialsResult:
"""
批量准备物料 - 虚拟起始节点
作为工作流的起始节点,生成指定数量的物料编号供后续节点使用。
输出5个handle (material_1 ~ material_5)分别对应实验1~5。
Args:
count: 待生成的物料数量默认5 (生成 A1-A5)
Returns:
PrepareMaterialsResult: 包含 material_1 ~ material_5 用于传递给 move_to_heating_station
"""
# 生成物料列表 A1 - A{count}
materials = [i for i in range(1, count + 1)]
self.logger.info(
f"[准备物料] 生成 {count} 个物料: "
f"A1-A{count} -> material_1~material_{count}"
)
return {
"success": True,
"count": count,
"material_1": materials[0] if len(materials) > 0 else 0,
"material_2": materials[1] if len(materials) > 1 else 0,
"material_3": materials[2] if len(materials) > 2 else 0,
"material_4": materials[3] if len(materials) > 3 else 0,
"material_5": materials[4] if len(materials) > 4 else 0,
"message": f"已准备 {count} 个物料: A1-A{count}",
}
def move_to_heating_station(
self,
material_number: int,
) -> MoveToHeatingStationResult:
"""
将物料从An位置移动到加热台
多线程并发调用时,会竞争机械臂使用权,并自动查找空闲加热台
Args:
material_number: 物料编号 (1-5)
Returns:
MoveToHeatingStationResult: 包含 station_id, material_number 等用于传递给下一个节点
"""
# 根据物料编号生成物料ID
material_id = f"A{material_number}"
task_desc = f"移动{material_id}到加热台"
self.logger.info(f"[任务] {task_desc} - 开始执行")
# 记录任务
with self._tasks_lock:
self._active_tasks[material_id] = {
"status": "waiting_for_arm",
"start_time": time.time(),
}
try:
# 步骤1: 等待获取机械臂使用权(竞争)
with self._tasks_lock:
self._active_tasks[material_id]["status"] = "waiting_for_arm"
self._acquire_arm(task_desc)
# 步骤2: 查找空闲加热台
with self._tasks_lock:
self._active_tasks[material_id]["status"] = "finding_station"
station_id = None
# 循环等待直到找到空闲加热台
while station_id is None:
station_id = self._find_available_heating_station()
if station_id is None:
self.logger.info(f"[{material_id}] 没有空闲加热台,等待中...")
# 释放机械臂,等待后重试
self._release_arm()
time.sleep(0.5)
self._acquire_arm(task_desc)
# 步骤3: 占用加热台 - 立即标记为OCCUPIED防止其他任务选择同一加热台
with self._stations_lock:
self._heating_stations[station_id].state = HeatingStationState.OCCUPIED
self._heating_stations[station_id].current_material = material_id
self._heating_stations[station_id].material_number = material_number
# 步骤4: 模拟机械臂移动操作 (3秒)
with self._tasks_lock:
self._active_tasks[material_id]["status"] = "arm_moving"
self._active_tasks[material_id]["assigned_station"] = station_id
self.logger.info(f"[{material_id}] 机械臂正在移动到加热台{station_id}...")
time.sleep(self.ARM_OPERATION_TIME)
# 步骤5: 放入加热台完成
self._update_data_status(f"{material_id}已放入加热台{station_id}")
self.logger.info(f"[{material_id}] 已放入加热台{station_id} (用时{self.ARM_OPERATION_TIME}s)")
# 释放机械臂
self._release_arm()
with self._tasks_lock:
self._active_tasks[material_id]["status"] = "placed_on_station"
return {
"success": True,
"station_id": station_id,
"material_id": material_id,
"material_number": material_number,
"message": f"{material_id}已成功移动到加热台{station_id}",
}
except Exception as e:
self.logger.error(f"[{material_id}] 移动失败: {str(e)}")
if self._arm_lock.locked():
self._release_arm()
return {
"success": False,
"station_id": -1,
"material_id": material_id,
"material_number": material_number,
"message": f"移动失败: {str(e)}",
}
def start_heating(
self,
station_id: int,
material_number: int,
) -> StartHeatingResult:
"""
启动指定加热台的加热程序
Args:
station_id: 加热台ID (1-3),从 move_to_heating_station 的 handle 传入
material_number: 物料编号,从 move_to_heating_station 的 handle 传入
Returns:
StartHeatingResult: 包含 station_id, material_number 等用于传递给下一个节点
"""
self.logger.info(f"[加热台{station_id}] 开始加热")
if station_id not in self._heating_stations:
return {
"success": False,
"station_id": station_id,
"material_id": "",
"material_number": material_number,
"message": f"无效的加热台ID: {station_id}",
}
with self._stations_lock:
station = self._heating_stations[station_id]
if station.current_material is None:
return {
"success": False,
"station_id": station_id,
"material_id": "",
"material_number": material_number,
"message": f"加热台{station_id}上没有物料",
}
if station.state == HeatingStationState.HEATING:
return {
"success": False,
"station_id": station_id,
"material_id": station.current_material,
"material_number": material_number,
"message": f"加热台{station_id}已经在加热中",
}
material_id = station.current_material
# 开始加热
station.state = HeatingStationState.HEATING
station.heating_start_time = time.time()
station.heating_progress = 0.0
with self._tasks_lock:
if material_id in self._active_tasks:
self._active_tasks[material_id]["status"] = "heating"
self._update_data_status(f"加热台{station_id}开始加热{material_id}")
# 模拟加热过程 (10秒)
start_time = time.time()
while True:
elapsed = time.time() - start_time
progress = min(100.0, (elapsed / self.HEATING_TIME) * 100)
with self._stations_lock:
self._heating_stations[station_id].heating_progress = progress
self._update_data_status(f"加热台{station_id}加热中: {progress:.1f}%")
if elapsed >= self.HEATING_TIME:
break
time.sleep(1.0)
# 加热完成
with self._stations_lock:
self._heating_stations[station_id].state = HeatingStationState.COMPLETED
self._heating_stations[station_id].heating_progress = 100.0
with self._tasks_lock:
if material_id in self._active_tasks:
self._active_tasks[material_id]["status"] = "heating_completed"
self._update_data_status(f"加热台{station_id}加热完成")
self.logger.info(f"[加热台{station_id}] {material_id}加热完成 (用时{self.HEATING_TIME}s)")
return {
"success": True,
"station_id": station_id,
"material_id": material_id,
"material_number": material_number,
"message": f"加热台{station_id}加热完成",
}
def move_to_output(
self,
station_id: int,
material_number: int,
) -> MoveToOutputResult:
"""
将物料从加热台移动到输出位置Cn
Args:
station_id: 加热台ID (1-3),从 start_heating 的 handle 传入
material_number: 物料编号,从 start_heating 的 handle 传入,用于确定输出位置 Cn
Returns:
MoveToOutputResult: 包含执行结果
"""
output_number = material_number # 物料编号决定输出位置
if station_id not in self._heating_stations:
return {
"success": False,
"station_id": station_id,
"material_id": "",
"output_position": f"C{output_number}",
"message": f"无效的加热台ID: {station_id}",
}
with self._stations_lock:
station = self._heating_stations[station_id]
material_id = station.current_material
if material_id is None:
return {
"success": False,
"station_id": station_id,
"material_id": "",
"output_position": f"C{output_number}",
"message": f"加热台{station_id}上没有物料",
}
if station.state != HeatingStationState.COMPLETED:
return {
"success": False,
"station_id": station_id,
"material_id": material_id,
"output_position": f"C{output_number}",
"message": f"加热台{station_id}尚未完成加热 (当前状态: {station.state.value})",
}
output_position = f"C{output_number}"
task_desc = f"从加热台{station_id}移动{material_id}{output_position}"
self.logger.info(f"[任务] {task_desc}")
try:
with self._tasks_lock:
if material_id in self._active_tasks:
self._active_tasks[material_id]["status"] = "waiting_for_arm_output"
# 获取机械臂
self._acquire_arm(task_desc)
with self._tasks_lock:
if material_id in self._active_tasks:
self._active_tasks[material_id]["status"] = "arm_moving_to_output"
# 模拟机械臂操作 (3秒)
self.logger.info(f"[{material_id}] 机械臂正在从加热台{station_id}取出并移动到{output_position}...")
time.sleep(self.ARM_OPERATION_TIME)
# 清空加热台
with self._stations_lock:
self._heating_stations[station_id].state = HeatingStationState.IDLE
self._heating_stations[station_id].current_material = None
self._heating_stations[station_id].material_number = None
self._heating_stations[station_id].heating_progress = 0.0
self._heating_stations[station_id].heating_start_time = None
# 释放机械臂
self._release_arm()
# 任务完成
with self._tasks_lock:
if material_id in self._active_tasks:
self._active_tasks[material_id]["status"] = "completed"
self._active_tasks[material_id]["end_time"] = time.time()
self._update_data_status(f"{material_id}已移动到{output_position}")
self.logger.info(f"[{material_id}] 已成功移动到{output_position} (用时{self.ARM_OPERATION_TIME}s)")
return {
"success": True,
"station_id": station_id,
"material_id": material_id,
"output_position": output_position,
"message": f"{material_id}已成功移动到{output_position}",
}
except Exception as e:
self.logger.error(f"移动到输出位置失败: {str(e)}")
if self._arm_lock.locked():
self._release_arm()
return {
"success": False,
"station_id": station_id,
"material_id": "",
"output_position": output_position,
"message": f"移动失败: {str(e)}",
}
# ============ 状态属性 ============
@property
def status(self) -> str:
return self.data.get("status", "Unknown")
@property
def arm_state(self) -> str:
return self._arm_state.value
@property
def arm_current_task(self) -> str:
return self._arm_current_task or ""
@property
def heating_station_1_state(self) -> str:
with self._stations_lock:
station = self._heating_stations.get(1)
return station.state.value if station else "unknown"
@property
def heating_station_1_material(self) -> str:
with self._stations_lock:
station = self._heating_stations.get(1)
return station.current_material or "" if station else ""
@property
def heating_station_1_progress(self) -> float:
with self._stations_lock:
station = self._heating_stations.get(1)
return station.heating_progress if station else 0.0
@property
def heating_station_2_state(self) -> str:
with self._stations_lock:
station = self._heating_stations.get(2)
return station.state.value if station else "unknown"
@property
def heating_station_2_material(self) -> str:
with self._stations_lock:
station = self._heating_stations.get(2)
return station.current_material or "" if station else ""
@property
def heating_station_2_progress(self) -> float:
with self._stations_lock:
station = self._heating_stations.get(2)
return station.heating_progress if station else 0.0
@property
def heating_station_3_state(self) -> str:
with self._stations_lock:
station = self._heating_stations.get(3)
return station.state.value if station else "unknown"
@property
def heating_station_3_material(self) -> str:
with self._stations_lock:
station = self._heating_stations.get(3)
return station.current_material or "" if station else ""
@property
def heating_station_3_progress(self) -> float:
with self._stations_lock:
station = self._heating_stations.get(3)
return station.heating_progress if station else 0.0
@property
def active_tasks_count(self) -> int:
with self._tasks_lock:
return len(self._active_tasks)
@property
def message(self) -> str:
return self.data.get("message", "")

View File

@@ -1,589 +0,0 @@
workstation.bioyond_dispensing_station:
category:
- workstation
- bioyond
class:
action_value_mappings:
auto-batch_create_90_10_vial_feeding_tasks:
feedback: {}
goal: {}
goal_default:
delay_time: null
hold_m_name: null
liquid_material_name: NMP
speed: null
temperature: null
titration: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
delay_time:
type: string
hold_m_name:
type: string
liquid_material_name:
default: NMP
type: string
speed:
type: string
temperature:
type: string
titration:
type: string
required:
- titration
type: object
result: {}
required:
- goal
title: batch_create_90_10_vial_feeding_tasks参数
type: object
type: UniLabJsonCommand
auto-batch_create_diamine_solution_tasks:
feedback: {}
goal: {}
goal_default:
delay_time: null
liquid_material_name: NMP
solutions: null
speed: null
temperature: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
delay_time:
type: string
liquid_material_name:
default: NMP
type: string
solutions:
type: string
speed:
type: string
temperature:
type: string
required:
- solutions
type: object
result: {}
required:
- goal
title: batch_create_diamine_solution_tasks参数
type: object
type: UniLabJsonCommand
auto-brief_step_parameters:
feedback: {}
goal: {}
goal_default:
data: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
data:
type: object
required:
- data
type: object
result: {}
required:
- goal
title: brief_step_parameters参数
type: object
type: UniLabJsonCommand
auto-compute_experiment_design:
feedback: {}
goal: {}
goal_default:
m_tot: '70'
ratio: null
titration_percent: '0.03'
wt_percent: '0.25'
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
m_tot:
default: '70'
type: string
ratio:
type: object
titration_percent:
default: '0.03'
type: string
wt_percent:
default: '0.25'
type: string
required:
- ratio
type: object
result:
properties:
feeding_order:
items: {}
title: Feeding Order
type: array
return_info:
title: Return Info
type: string
solutions:
items: {}
title: Solutions
type: array
solvents:
additionalProperties: true
title: Solvents
type: object
titration:
additionalProperties: true
title: Titration
type: object
required:
- solutions
- titration
- solvents
- feeding_order
- return_info
title: ComputeExperimentDesignReturn
type: object
required:
- goal
title: compute_experiment_design参数
type: object
type: UniLabJsonCommand
auto-process_order_finish_report:
feedback: {}
goal: {}
goal_default:
report_request: null
used_materials: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
report_request:
type: string
used_materials:
type: string
required:
- report_request
- used_materials
type: object
result: {}
required:
- goal
title: process_order_finish_report参数
type: object
type: UniLabJsonCommand
auto-project_order_report:
feedback: {}
goal: {}
goal_default:
order_id: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
order_id:
type: string
required:
- order_id
type: object
result: {}
required:
- goal
title: project_order_report参数
type: object
type: UniLabJsonCommand
auto-query_resource_by_name:
feedback: {}
goal: {}
goal_default:
material_name: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
material_name:
type: string
required:
- material_name
type: object
result: {}
required:
- goal
title: query_resource_by_name参数
type: object
type: UniLabJsonCommand
auto-transfer_materials_to_reaction_station:
feedback: {}
goal: {}
goal_default:
target_device_id: null
transfer_groups: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
target_device_id:
type: string
transfer_groups:
type: array
required:
- target_device_id
- transfer_groups
type: object
result: {}
required:
- goal
title: transfer_materials_to_reaction_station参数
type: object
type: UniLabJsonCommand
auto-wait_for_multiple_orders_and_get_reports:
feedback: {}
goal: {}
goal_default:
batch_create_result: null
check_interval: 10
timeout: 7200
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
batch_create_result:
type: string
check_interval:
default: 10
type: integer
timeout:
default: 7200
type: integer
required: []
type: object
result: {}
required:
- goal
title: wait_for_multiple_orders_and_get_reports参数
type: object
type: UniLabJsonCommand
auto-workflow_sample_locations:
feedback: {}
goal: {}
goal_default:
workflow_id: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
workflow_id:
type: string
required:
- workflow_id
type: object
result: {}
required:
- goal
title: workflow_sample_locations参数
type: object
type: UniLabJsonCommand
create_90_10_vial_feeding_task:
feedback: {}
goal:
delay_time: delay_time
hold_m_name: hold_m_name
order_name: order_name
percent_10_1_assign_material_name: percent_10_1_assign_material_name
percent_10_1_liquid_material_name: percent_10_1_liquid_material_name
percent_10_1_target_weigh: percent_10_1_target_weigh
percent_10_1_volume: percent_10_1_volume
percent_10_2_assign_material_name: percent_10_2_assign_material_name
percent_10_2_liquid_material_name: percent_10_2_liquid_material_name
percent_10_2_target_weigh: percent_10_2_target_weigh
percent_10_2_volume: percent_10_2_volume
percent_10_3_assign_material_name: percent_10_3_assign_material_name
percent_10_3_liquid_material_name: percent_10_3_liquid_material_name
percent_10_3_target_weigh: percent_10_3_target_weigh
percent_10_3_volume: percent_10_3_volume
percent_90_1_assign_material_name: percent_90_1_assign_material_name
percent_90_1_target_weigh: percent_90_1_target_weigh
percent_90_2_assign_material_name: percent_90_2_assign_material_name
percent_90_2_target_weigh: percent_90_2_target_weigh
percent_90_3_assign_material_name: percent_90_3_assign_material_name
percent_90_3_target_weigh: percent_90_3_target_weigh
speed: speed
temperature: temperature
goal_default:
delay_time: ''
hold_m_name: ''
order_name: ''
percent_10_1_assign_material_name: ''
percent_10_1_liquid_material_name: ''
percent_10_1_target_weigh: ''
percent_10_1_volume: ''
percent_10_2_assign_material_name: ''
percent_10_2_liquid_material_name: ''
percent_10_2_target_weigh: ''
percent_10_2_volume: ''
percent_10_3_assign_material_name: ''
percent_10_3_liquid_material_name: ''
percent_10_3_target_weigh: ''
percent_10_3_volume: ''
percent_90_1_assign_material_name: ''
percent_90_1_target_weigh: ''
percent_90_2_assign_material_name: ''
percent_90_2_target_weigh: ''
percent_90_3_assign_material_name: ''
percent_90_3_target_weigh: ''
speed: ''
temperature: ''
handles: {}
result:
return_info: return_info
schema:
description: ''
properties:
feedback:
properties: {}
required: []
title: DispenStationVialFeed_Feedback
type: object
goal:
properties:
delay_time:
type: string
hold_m_name:
type: string
order_name:
type: string
percent_10_1_assign_material_name:
type: string
percent_10_1_liquid_material_name:
type: string
percent_10_1_target_weigh:
type: string
percent_10_1_volume:
type: string
percent_10_2_assign_material_name:
type: string
percent_10_2_liquid_material_name:
type: string
percent_10_2_target_weigh:
type: string
percent_10_2_volume:
type: string
percent_10_3_assign_material_name:
type: string
percent_10_3_liquid_material_name:
type: string
percent_10_3_target_weigh:
type: string
percent_10_3_volume:
type: string
percent_90_1_assign_material_name:
type: string
percent_90_1_target_weigh:
type: string
percent_90_2_assign_material_name:
type: string
percent_90_2_target_weigh:
type: string
percent_90_3_assign_material_name:
type: string
percent_90_3_target_weigh:
type: string
speed:
type: string
temperature:
type: string
required:
- order_name
- percent_90_1_assign_material_name
- percent_90_1_target_weigh
- percent_90_2_assign_material_name
- percent_90_2_target_weigh
- percent_90_3_assign_material_name
- percent_90_3_target_weigh
- percent_10_1_assign_material_name
- percent_10_1_target_weigh
- percent_10_1_volume
- percent_10_1_liquid_material_name
- percent_10_2_assign_material_name
- percent_10_2_target_weigh
- percent_10_2_volume
- percent_10_2_liquid_material_name
- percent_10_3_assign_material_name
- percent_10_3_target_weigh
- percent_10_3_volume
- percent_10_3_liquid_material_name
- speed
- temperature
- delay_time
- hold_m_name
title: DispenStationVialFeed_Goal
type: object
result:
properties:
return_info:
type: string
required:
- return_info
title: DispenStationVialFeed_Result
type: object
required:
- goal
title: DispenStationVialFeed
type: object
type: DispenStationVialFeed
create_diamine_solution_task:
feedback: {}
goal:
delay_time: delay_time
hold_m_name: hold_m_name
liquid_material_name: liquid_material_name
material_name: material_name
order_name: order_name
speed: speed
target_weigh: target_weigh
temperature: temperature
volume: volume
goal_default:
delay_time: ''
hold_m_name: ''
liquid_material_name: ''
material_name: ''
order_name: ''
speed: ''
target_weigh: ''
temperature: ''
volume: ''
handles: {}
result:
return_info: return_info
schema:
description: ''
properties:
feedback:
properties: {}
required: []
title: DispenStationSolnPrep_Feedback
type: object
goal:
properties:
delay_time:
type: string
hold_m_name:
type: string
liquid_material_name:
type: string
material_name:
type: string
order_name:
type: string
speed:
type: string
target_weigh:
type: string
temperature:
type: string
volume:
type: string
required:
- order_name
- material_name
- target_weigh
- volume
- liquid_material_name
- speed
- temperature
- delay_time
- hold_m_name
title: DispenStationSolnPrep_Goal
type: object
result:
properties:
return_info:
type: string
required:
- return_info
title: DispenStationSolnPrep_Result
type: object
required:
- goal
title: DispenStationSolnPrep
type: object
type: DispenStationSolnPrep
module: unilabos.devices.workstation.bioyond_studio.dispensing_station:BioyondDispensingStation
status_types: {}
type: python
config_info: []
description: ''
handles: []
icon: ''
init_param_schema:
config:
properties:
config:
type: string
deck:
type: string
required:
- config
- deck
type: object
data:
properties: {}
required: []
type: object
version: 1.0.0

File diff suppressed because it is too large.

View File

@@ -5,6 +5,135 @@ bioyond_dispensing_station:
- bioyond_dispensing_station
class:
action_value_mappings:
auto-brief_step_parameters:
feedback: {}
goal: {}
goal_default:
data: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
data:
type: object
required:
- data
type: object
result: {}
required:
- goal
title: brief_step_parameters参数
type: object
type: UniLabJsonCommand
auto-process_order_finish_report:
feedback: {}
goal: {}
goal_default:
report_request: null
used_materials: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
report_request:
type: string
used_materials:
type: string
required:
- report_request
- used_materials
type: object
result: {}
required:
- goal
title: process_order_finish_report参数
type: object
type: UniLabJsonCommand
auto-project_order_report:
feedback: {}
goal: {}
goal_default:
order_id: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
order_id:
type: string
required:
- order_id
type: object
result: {}
required:
- goal
title: project_order_report参数
type: object
type: UniLabJsonCommand
auto-query_resource_by_name:
feedback: {}
goal: {}
goal_default:
material_name: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
material_name:
type: string
required:
- material_name
type: object
result: {}
required:
- goal
title: query_resource_by_name参数
type: object
type: UniLabJsonCommand
auto-workflow_sample_locations:
feedback: {}
goal: {}
goal_default:
workflow_id: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
workflow_id:
type: string
required:
- workflow_id
type: object
result: {}
required:
- goal
title: workflow_sample_locations参数
type: object
type: UniLabJsonCommand
batch_create_90_10_vial_feeding_tasks:
feedback: {}
goal:

View File

@@ -405,7 +405,7 @@ coincellassemblyworkstation_device:
goal:
properties:
bottle_num:
type: integer
type: string
required:
- bottle_num
type: object

View File

@@ -49,32 +49,7 @@ opcua_example:
title: load_config参数
type: object
type: UniLabJsonCommand
auto-post_init:
feedback: {}
goal: {}
goal_default:
ros_node: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
ros_node:
type: string
required:
- ros_node
type: object
result: {}
required:
- goal
title: post_init参数
type: object
type: UniLabJsonCommand
auto-print_cache_stats:
auto-refresh_node_values:
feedback: {}
goal: {}
goal_default: {}
@@ -92,32 +67,7 @@ opcua_example:
result: {}
required:
- goal
title: print_cache_stats参数
type: object
type: UniLabJsonCommand
auto-read_node:
feedback: {}
goal: {}
goal_default:
node_name: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
node_name:
type: string
required:
- node_name
type: object
result: {}
required:
- goal
title: read_node参数
title: refresh_node_values参数
type: object
type: UniLabJsonCommand
auto-set_node_value:
@@ -149,9 +99,50 @@ opcua_example:
title: set_node_value参数
type: object
type: UniLabJsonCommand
auto-start_node_refresh:
feedback: {}
goal: {}
goal_default: {}
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties: {}
required: []
type: object
result: {}
required:
- goal
title: start_node_refresh参数
type: object
type: UniLabJsonCommand
auto-stop_node_refresh:
feedback: {}
goal: {}
goal_default: {}
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties: {}
required: []
type: object
result: {}
required:
- goal
title: stop_node_refresh参数
type: object
type: UniLabJsonCommand
module: unilabos.device_comms.opcua_client.client:OpcUaClient
status_types:
cache_stats: dict
node_value: String
type: python
config_info: []
@@ -161,23 +152,15 @@ opcua_example:
init_param_schema:
config:
properties:
cache_timeout:
default: 5.0
type: number
config_path:
type: string
deck:
type: string
password:
type: string
subscription_interval:
default: 500
type: integer
refresh_interval:
default: 1.0
type: number
url:
type: string
use_subscription:
default: true
type: boolean
username:
type: string
required:
@@ -185,12 +168,9 @@ opcua_example:
type: object
data:
properties:
cache_stats:
type: object
node_value:
type: string
required:
- node_value
- cache_stats
type: object
version: 1.0.0

View File

@@ -58,6 +58,313 @@ reaction_station.bioyond:
title: add_time_constraint参数
type: object
type: UniLabJsonCommand
auto-clear_workflows:
feedback: {}
goal: {}
goal_default: {}
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties: {}
required: []
type: object
result: {}
required:
- goal
title: clear_workflows参数
type: object
type: UniLabJsonCommand
auto-create_order:
feedback: {}
goal: {}
goal_default:
json_str: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
json_str:
type: string
required:
- json_str
type: object
result: {}
required:
- goal
title: create_order参数
type: object
type: UniLabJsonCommand
auto-hard_delete_merged_workflows:
feedback: {}
goal: {}
goal_default:
workflow_ids: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
workflow_ids:
items:
type: string
type: array
required:
- workflow_ids
type: object
result: {}
required:
- goal
title: hard_delete_merged_workflows参数
type: object
type: UniLabJsonCommand
auto-merge_workflow_with_parameters:
feedback: {}
goal: {}
goal_default:
json_str: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
json_str:
type: string
required:
- json_str
type: object
result: {}
required:
- goal
title: merge_workflow_with_parameters参数
type: object
type: UniLabJsonCommand
auto-process_temperature_cutoff_report:
feedback: {}
goal: {}
goal_default:
report_request: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
report_request:
type: string
required:
- report_request
type: object
result: {}
required:
- goal
title: process_temperature_cutoff_report参数
type: object
type: UniLabJsonCommand
auto-process_web_workflows:
feedback: {}
goal: {}
goal_default:
web_workflow_json: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
web_workflow_json:
type: string
required:
- web_workflow_json
type: object
result: {}
required:
- goal
title: process_web_workflows参数
type: object
type: UniLabJsonCommand
auto-set_reactor_temperature:
feedback: {}
goal: {}
goal_default:
reactor_id: null
temperature: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
reactor_id:
type: integer
temperature:
type: number
required:
- reactor_id
- temperature
type: object
result: {}
required:
- goal
title: set_reactor_temperature参数
type: object
type: UniLabJsonCommand
auto-skip_titration_steps:
feedback: {}
goal: {}
goal_default:
preintake_id: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
preintake_id:
type: string
required:
- preintake_id
type: object
result: {}
required:
- goal
title: skip_titration_steps参数
type: object
type: UniLabJsonCommand
auto-sync_workflow_sequence_from_bioyond:
feedback: {}
goal: {}
goal_default: {}
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties: {}
required: []
type: object
result: {}
required:
- goal
title: sync_workflow_sequence_from_bioyond参数
type: object
type: UniLabJsonCommand
auto-wait_for_multiple_orders_and_get_reports:
feedback: {}
goal: {}
goal_default:
batch_create_result: null
check_interval: 10
timeout: 7200
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
batch_create_result:
type: string
check_interval:
default: 10
type: integer
timeout:
default: 7200
type: integer
required: []
type: object
result: {}
required:
- goal
title: wait_for_multiple_orders_and_get_reports参数
type: object
type: UniLabJsonCommand
auto-workflow_sequence:
feedback: {}
goal: {}
goal_default:
value: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
value:
items:
type: string
type: array
required:
- value
type: object
result: {}
required:
- goal
title: workflow_sequence参数
type: object
type: UniLabJsonCommand
auto-workflow_step_query:
feedback: {}
goal: {}
goal_default:
workflow_id: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
workflow_id:
type: string
required:
- workflow_id
type: object
result: {}
required:
- goal
title: workflow_step_query参数
type: object
type: UniLabJsonCommand
clean_all_server_workflows:
feedback: {}
goal: {}
@@ -674,17 +981,7 @@ reaction_station.bioyond:
module: unilabos.devices.workstation.bioyond_studio.reaction_station.reaction_station:BioyondReactionStation
protocol_type: []
status_types:
average_viscosity: Float64
force: Float64
in_temperature: Float64
out_temperature: Float64
pt100_temperature: Float64
sensor_average_temperature: Float64
setting_temperature: Float64
speed: Float64
target_temperature: Float64
viscosity: Float64
workflow_sequence: String
workflow_sequence: str
type: python
config_info: []
description: Bioyond反应站
@@ -704,9 +1001,7 @@ reaction_station.bioyond:
data:
properties:
workflow_sequence:
items:
type: string
type: array
type: string
required:
- workflow_sequence
type: object
@@ -716,19 +1011,34 @@ reaction_station.reactor:
- reactor
- reaction_station_bioyond
class:
action_value_mappings: {}
action_value_mappings:
auto-update_metrics:
feedback: {}
goal: {}
goal_default:
payload: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
payload:
type: object
required:
- payload
type: object
result: {}
required:
- goal
title: update_metrics参数
type: object
type: UniLabJsonCommand
module: unilabos.devices.workstation.bioyond_studio.reaction_station.reaction_station:BioyondReactor
status_types:
average_viscosity: Float64
force: Float64
in_temperature: Float64
out_temperature: Float64
pt100_temperature: Float64
sensor_average_temperature: Float64
setting_temperature: Float64
speed: Float64
target_temperature: Float64
viscosity: Float64
status_types: {}
type: python
config_info: []
description: 反应站子设备-反应器
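Each auto-* entry above is a UniLabJsonCommand whose schema describes the JSON goal the action accepts. As a minimal sketch of what a conforming payload looks like, the goal sub-schema for auto-set_reactor_temperature from this registry can be checked with the third-party jsonschema package (used here purely for illustration; nothing in this change implies UniLab OS validates goals this way):

import jsonschema

# Goal sub-schema copied from the auto-set_reactor_temperature entry above.
goal_schema = {
    "type": "object",
    "properties": {
        "reactor_id": {"type": "integer"},
        "temperature": {"type": "number"},
    },
    "required": ["reactor_id", "temperature"],
}

goal = {"reactor_id": 1, "temperature": 60.0}
jsonschema.validate(instance=goal, schema=goal_schema)  # raises ValidationError if the payload is malformed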

File diff suppressed because it is too large


@@ -71,6 +71,20 @@ class Registry:
from unilabos.app.web.utils.action_utils import get_yaml_from_goal_type
# 获取 HostNode 类的增强信息,用于自动生成 action schema
host_node_enhanced_info = get_enhanced_class_info(
"unilabos.ros.nodes.presets.host_node:HostNode", use_dynamic=True
)
# 为 test_latency 生成 schema保留原有 description
test_latency_method_info = host_node_enhanced_info.get("action_methods", {}).get("test_latency", {})
test_latency_schema = self._generate_unilab_json_command_schema(
test_latency_method_info.get("args", []),
"test_latency",
test_latency_method_info.get("return_annotation"),
)
test_latency_schema["description"] = "用于测试延迟的动作,返回延迟时间和时间差。"
self.device_type_registry.update(
{
"host_node": {
@@ -152,14 +166,19 @@ class Registry:
},
},
"test_latency": {
"type": self.EmptyIn,
"type": (
"UniLabJsonCommandAsync"
if test_latency_method_info.get("is_async", False)
else "UniLabJsonCommand"
),
"goal": {},
"feedback": {},
"result": {},
"schema": ros_action_to_json_schema(
self.EmptyIn, "用于测试延迟的动作,返回延迟时间和时间差。"
),
"goal_default": {},
"schema": test_latency_schema,
"goal_default": {
arg["name"]: arg["default"]
for arg in test_latency_method_info.get("args", [])
},
"handles": {},
},
"auto-test_resource": {
@@ -479,7 +498,11 @@ class Registry:
return status_schema
def _generate_unilab_json_command_schema(
self, method_args: List[Dict[str, Any]], method_name: str, return_annotation: Any = None
self,
method_args: List[Dict[str, Any]],
method_name: str,
return_annotation: Any = None,
previous_schema: Dict[str, Any] | None = None,
) -> Dict[str, Any]:
"""
根据UniLabJsonCommand方法信息生成JSON Schema暂不支持嵌套类型
@@ -488,6 +511,7 @@ class Registry:
method_args: 方法信息字典包含args等
method_name: 方法名称
return_annotation: 返回类型注解用于生成result schema仅支持TypedDict
previous_schema: 之前的 schema用于保留 goal/feedback/result 下一级字段的 description
Returns:
JSON Schema格式的参数schema
@@ -521,7 +545,7 @@ class Registry:
if return_annotation is not None and self._is_typed_dict(return_annotation):
result_schema = self._generate_typed_dict_result_schema(return_annotation)
return {
final_schema = {
"title": f"{method_name}参数",
"description": f"",
"type": "object",
@@ -529,6 +553,40 @@ class Registry:
"required": ["goal"],
}
# 保留之前 schema 中 goal/feedback/result 下一级字段的 description
if previous_schema:
self._preserve_field_descriptions(final_schema, previous_schema)
return final_schema
def _preserve_field_descriptions(self, new_schema: Dict[str, Any], previous_schema: Dict[str, Any]) -> None:
"""
保留之前 schema 中 goal/feedback/result 下一级字段的 description 和 title
Args:
new_schema: 新生成的 schema会被修改
previous_schema: 之前的 schema
"""
for section in ["goal", "feedback", "result"]:
new_section = new_schema.get("properties", {}).get(section, {})
prev_section = previous_schema.get("properties", {}).get(section, {})
if not new_section or not prev_section:
continue
new_props = new_section.get("properties", {})
prev_props = prev_section.get("properties", {})
for field_name, field_schema in new_props.items():
if field_name in prev_props:
prev_field = prev_props[field_name]
# 保留字段的 description
if "description" in prev_field and prev_field["description"]:
field_schema["description"] = prev_field["description"]
# 保留字段的 title用户自定义的中文名
if "title" in prev_field and prev_field["title"]:
field_schema["title"] = prev_field["title"]
def _is_typed_dict(self, annotation: Any) -> bool:
"""
检查类型注解是否是TypedDict
@@ -697,13 +755,10 @@ class Registry:
sorted(device_config["class"]["status_types"].items())
)
if complete_registry:
# 保存原有的description信息
old_descriptions = {}
# 保存原有的 action 配置(用于保留 schema 的 description 和 handles 等)
old_action_configs = {}
for action_name, action_config in device_config["class"]["action_value_mappings"].items():
if "description" in action_config.get("schema", {}):
description = action_config["schema"]["description"]
if len(description):
old_descriptions[action_name] = action_config["schema"]["description"]
old_action_configs[action_name] = action_config
device_config["class"]["action_value_mappings"] = {
k: v
@@ -719,10 +774,15 @@ class Registry:
"feedback": {},
"result": {},
"schema": self._generate_unilab_json_command_schema(
v["args"], k, v.get("return_annotation")
v["args"],
k,
v.get("return_annotation"),
# 传入旧的 schema 以保留字段 description
old_action_configs.get(f"auto-{k}", {}).get("schema"),
),
"goal_default": {i["name"]: i["default"] for i in v["args"]},
"handles": [],
# 保留原有的 handles 配置
"handles": old_action_configs.get(f"auto-{k}", {}).get("handles", []),
"placeholder_keys": {
i["name"]: (
"unilabos_resources"
@@ -746,12 +806,14 @@ class Registry:
if k not in device_config["class"]["action_value_mappings"]
}
)
# 恢复原有的description信息auto开头的不修改
for action_name, description in old_descriptions.items():
# 恢复原有的 description 信息(auto- 开头的动作
for action_name, old_config in old_action_configs.items():
if action_name in device_config["class"]["action_value_mappings"]: # 有一些会被删除
device_config["class"]["action_value_mappings"][action_name]["schema"][
"description"
] = description
old_schema = old_config.get("schema", {})
if "description" in old_schema and old_schema["description"]:
device_config["class"]["action_value_mappings"][action_name]["schema"][
"description"
] = old_schema["description"]
device_config["init_param_schema"] = {}
device_config["init_param_schema"]["config"] = self._generate_unilab_json_command_schema(
enhanced_info["init_params"], "__init__"
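The practical effect of passing the old schema back into _generate_unilab_json_command_schema is that hand-written field descriptions and titles survive a registry rebuild. A standalone sketch of the same copying rule applied to a freshly regenerated goal schema (the description and title values here are illustrative):

from typing import Any, Dict

def preserve_field_descriptions(new_schema: Dict[str, Any], previous_schema: Dict[str, Any]) -> None:
    # Same rule as _preserve_field_descriptions above: for goal/feedback/result,
    # copy description and title of fields that still exist in the new schema.
    for section in ("goal", "feedback", "result"):
        new_props = new_schema.get("properties", {}).get(section, {}).get("properties", {})
        prev_props = previous_schema.get("properties", {}).get(section, {}).get("properties", {})
        for name, field in new_props.items():
            prev = prev_props.get(name, {})
            if prev.get("description"):
                field["description"] = prev["description"]
            if prev.get("title"):
                field["title"] = prev["title"]

old = {"properties": {"goal": {"properties": {
    "json_str": {"type": "string", "description": "Order payload as a JSON string", "title": "Order JSON"}}}}}
new = {"properties": {"goal": {"properties": {"json_str": {"type": "string"}}}}}

preserve_field_descriptions(new, old)
print(new["properties"]["goal"]["properties"]["json_str"])
# {'type': 'string', 'description': 'Order payload as a JSON string', 'title': 'Order JSON'}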


@@ -770,13 +770,16 @@ def ros_message_to_json_schema(msg_class: Any, field_name: str) -> Dict[str, Any
return schema
def ros_action_to_json_schema(action_class: Any, description="") -> Dict[str, Any]:
def ros_action_to_json_schema(
action_class: Any, description="", previous_schema: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
"""
将 ROS Action 类转换为 JSON Schema
Args:
action_class: ROS Action 类
description: 描述
previous_schema: 之前的 schema用于保留 goal/feedback/result 下一级字段的 description
Returns:
完整的 JSON Schema 定义
@@ -810,9 +813,44 @@ def ros_action_to_json_schema(action_class: Any, description="") -> Dict[str, An
"required": ["goal"],
}
# 保留之前 schema 中 goal/feedback/result 下一级字段的 description
if previous_schema:
_preserve_field_descriptions(schema, previous_schema)
return schema
def _preserve_field_descriptions(
new_schema: Dict[str, Any], previous_schema: Dict[str, Any]
) -> None:
"""
保留之前 schema 中 goal/feedback/result 下一级字段的 description 和 title
Args:
new_schema: 新生成的 schema会被修改
previous_schema: 之前的 schema
"""
for section in ["goal", "feedback", "result"]:
new_section = new_schema.get("properties", {}).get(section, {})
prev_section = previous_schema.get("properties", {}).get(section, {})
if not new_section or not prev_section:
continue
new_props = new_section.get("properties", {})
prev_props = prev_section.get("properties", {})
for field_name, field_schema in new_props.items():
if field_name in prev_props:
prev_field = prev_props[field_name]
# 保留字段的 description
if "description" in prev_field and prev_field["description"]:
field_schema["description"] = prev_field["description"]
# 保留字段的 title用户自定义的中文名
if "title" in prev_field and prev_field["title"]:
field_schema["title"] = prev_field["title"]
def convert_ros_action_to_jsonschema(
action_name_or_type: Union[str, Type], output_file: Optional[str] = None, format: str = "json"
) -> Dict[str, Any]:
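The new previous_schema argument works the same way at this layer: a schema regenerated for a ROS action inherits per-field descriptions from an earlier version. A rough usage sketch, assuming a sourced ROS 2 environment with the Fibonacci tutorial action available and that the generated goal schema exposes the action's order field under properties (neither is guaranteed by this hunk):

from action_tutorials_interfaces.action import Fibonacci
# ros_action_to_json_schema is the function shown above; its module path is not visible in this view.

old_schema = ros_action_to_json_schema(Fibonacci, "Compute a Fibonacci sequence")
old_schema["properties"]["goal"]["properties"]["order"]["description"] = "Length of the sequence to compute"

# Regenerating with previous_schema carries that description over to the new schema.
new_schema = ros_action_to_json_schema(
    Fibonacci, "Compute a Fibonacci sequence", previous_schema=old_schema
)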


@@ -49,7 +49,6 @@ from unilabos.resources.resource_tracker import (
ResourceTreeInstance,
ResourceDictInstance,
)
from unilabos.ros.x.rclpyx import get_event_loop
from unilabos.ros.utils.driver_creator import WorkstationNodeCreator, PyLabRobotCreator, DeviceClassCreator
from rclpy.task import Task, Future
from unilabos.utils.import_manager import default_manager
@@ -185,7 +184,7 @@ class PropertyPublisher:
f"创建发布者 {name} 失败,可能由于注册表有误,类型: {msg_type},错误: {ex}\n{traceback.format_exc()}"
)
self.timer = node.create_timer(self.timer_period, self.publish_property)
self.__loop = get_event_loop()
self.__loop = ROS2DeviceNode.get_asyncio_loop()
str_msg_type = str(msg_type)[8:-2]
self.node.lab_logger().trace(f"发布属性: {name}, 类型: {str_msg_type}, 周期: {initial_period}秒, QoS: {qos}")
@@ -392,9 +391,12 @@ class BaseROS2DeviceNode(Node, Generic[T]):
parent_resource = self.resource_tracker.figure_resource(
{"name": bind_parent_id}
)
for r in rts.root_nodes:
# noinspection PyUnresolvedReferences
r.res_content.parent_uuid = parent_resource.unilabos_uuid
for r in rts.root_nodes:
# noinspection PyUnresolvedReferences
r.res_content.parent_uuid = parent_resource.unilabos_uuid
else:
for r in rts.root_nodes:
r.res_content.parent_uuid = self.uuid
if len(LIQUID_INPUT_SLOT) and LIQUID_INPUT_SLOT[0] == -1 and len(rts.root_nodes) == 1 and isinstance(rts.root_nodes[0], RegularContainer):
# noinspection PyTypeChecker
@@ -1754,6 +1756,15 @@ class ROS2DeviceNode:
它不继承设备类,而是通过代理模式访问设备类的属性和方法。
"""
# 类变量,用于循环管理
_asyncio_loop = None
_asyncio_loop_running = False
_asyncio_loop_thread = None
@classmethod
def get_asyncio_loop(cls):
return cls._asyncio_loop
@staticmethod
async def safe_task_wrapper(trace_callback, func, **kwargs):
try:
@@ -1830,6 +1841,11 @@ class ROS2DeviceNode:
print_publish: 是否打印发布信息
driver_is_ros:
"""
# 在初始化时检查循环状态
if ROS2DeviceNode._asyncio_loop_running and ROS2DeviceNode._asyncio_loop_thread is not None:
pass
elif ROS2DeviceNode._asyncio_loop_thread is None:
self._start_loop()
# 保存设备类是否支持异步上下文
self._has_async_context = hasattr(driver_class, "__aenter__") and hasattr(driver_class, "__aexit__")
@@ -1921,6 +1937,17 @@ class ROS2DeviceNode:
except Exception as e:
self._ros_node.lab_logger().error(f"设备后初始化失败: {e}")
def _start_loop(self):
def run_event_loop():
loop = asyncio.new_event_loop()
ROS2DeviceNode._asyncio_loop = loop
asyncio.set_event_loop(loop)
loop.run_forever()
ROS2DeviceNode._asyncio_loop_thread = threading.Thread(target=run_event_loop, daemon=True, name="ROS2DeviceNode")
ROS2DeviceNode._asyncio_loop_thread.start()
logger.info(f"循环线程已启动")
class DeviceInfoType(TypedDict):
id: str
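In place of the module-global loop from rclpyx (deleted below), the node class now owns a single background event loop started in a daemon thread. A self-contained sketch of the same pattern, showing how another thread can hand coroutines to such a loop (whether UniLab OS submits work exactly this way is not shown in this hunk):

import asyncio
import threading
import time

class LoopOwner:
    _asyncio_loop = None
    _asyncio_loop_thread = None

    @classmethod
    def get_asyncio_loop(cls):
        return cls._asyncio_loop

    @classmethod
    def start_loop(cls):
        def run_event_loop():
            loop = asyncio.new_event_loop()
            cls._asyncio_loop = loop
            asyncio.set_event_loop(loop)
            loop.run_forever()
        cls._asyncio_loop_thread = threading.Thread(target=run_event_loop, daemon=True, name="LoopOwner")
        cls._asyncio_loop_thread.start()

async def ping():
    await asyncio.sleep(0.1)
    return "pong"

LoopOwner.start_loop()
while LoopOwner.get_asyncio_loop() is None:  # wait for the thread to publish the loop
    time.sleep(0.01)

# Submit a coroutine to the background loop from the main thread.
future = asyncio.run_coroutine_threadsafe(ping(), LoopOwner.get_asyncio_loop())
print(future.result(timeout=1))  # "pong"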


@@ -5,7 +5,8 @@ import threading
import time
import traceback
import uuid
from typing import TYPE_CHECKING, Optional, Dict, Any, List, ClassVar, Set, TypedDict, Union
from typing import TYPE_CHECKING, Optional, Dict, Any, List, ClassVar, Set, Union
from typing_extensions import TypedDict
from action_msgs.msg import GoalStatus
from geometry_msgs.msg import Point
@@ -62,6 +63,18 @@ class TestResourceReturn(TypedDict):
devices: List[DeviceSlot]
class TestLatencyReturn(TypedDict):
"""test_latency方法的返回值类型"""
avg_rtt_ms: float
avg_time_diff_ms: float
max_time_error_ms: float
task_delay_ms: float
raw_delay_ms: float
test_count: int
status: str
class HostNode(BaseROS2DeviceNode):
"""
主机节点类,负责管理设备、资源和控制器
@@ -853,8 +866,13 @@ class HostNode(BaseROS2DeviceNode):
# 适配后端的一些额外处理
return_value = return_info.get("return_value")
if isinstance(return_value, dict):
unilabos_samples = return_info.get("unilabos_samples")
if isinstance(unilabos_samples, list):
unilabos_samples = return_value.pop("unilabos_samples", None)
if isinstance(unilabos_samples, list) and unilabos_samples:
self.lab_logger().info(
f"[Host Node] Job {job_id[:8]} returned {len(unilabos_samples)} sample(s): "
f"{[s.get('name', s.get('id', 'unknown')) if isinstance(s, dict) else str(s)[:20] for s in unilabos_samples[:5]]}"
f"{'...' if len(unilabos_samples) > 5 else ''}"
)
return_info["unilabos_samples"] = unilabos_samples
suc = return_info.get("suc", False)
if not suc:
@@ -881,7 +899,7 @@ class HostNode(BaseROS2DeviceNode):
# 清理 _goals 中的记录
if job_id in self._goals:
del self._goals[job_id]
self.lab_logger().debug(f"[Host Node] Removed goal {job_id[:8]} from _goals")
self.lab_logger().trace(f"[Host Node] Removed goal {job_id[:8]} from _goals")
# 存储结果供 HTTP API 查询
try:
@@ -1326,10 +1344,20 @@ class HostNode(BaseROS2DeviceNode):
self.lab_logger().debug(f"[Host Node-Resource] List parameters: {request}")
return response
def test_latency(self):
def test_latency(self) -> TestLatencyReturn:
"""
测试网络延迟的action实现
通过5次ping-pong机制校对时间误差并计算实际延迟
Returns:
TestLatencyReturn: 包含延迟测试结果的字典,包括:
- avg_rtt_ms: 平均往返时间(毫秒)
- avg_time_diff_ms: 平均时间差(毫秒)
- max_time_error_ms: 最大时间误差(毫秒)
- task_delay_ms: 实际任务延迟(毫秒),-1表示无法计算
- raw_delay_ms: 原始时间差(毫秒),-1表示无法计算
- test_count: 有效测试次数
- status: 测试状态,"success"表示成功,"all_timeout"表示全部超时
"""
import uuid as uuid_module
@@ -1392,7 +1420,15 @@ class HostNode(BaseROS2DeviceNode):
if not ping_results:
self.lab_logger().error("❌ 所有ping-pong测试都失败了")
return {"status": "all_timeout"}
return {
"avg_rtt_ms": -1.0,
"avg_time_diff_ms": -1.0,
"max_time_error_ms": -1.0,
"task_delay_ms": -1.0,
"raw_delay_ms": -1.0,
"test_count": 0,
"status": "all_timeout",
}
# 统计分析
rtts = [r["rtt_ms"] for r in ping_results]
@@ -1400,7 +1436,7 @@ class HostNode(BaseROS2DeviceNode):
avg_rtt_ms = sum(rtts) / len(rtts)
avg_time_diff_ms = sum(time_diffs) / len(time_diffs)
max_time_diff_error_ms = max(abs(min(time_diffs)), abs(max(time_diffs)))
max_time_diff_error_ms: float = max(abs(min(time_diffs)), abs(max(time_diffs)))
self.lab_logger().info("-" * 50)
self.lab_logger().info("[测试统计]")
@@ -1440,7 +1476,7 @@ class HostNode(BaseROS2DeviceNode):
self.lab_logger().info("=" * 60)
return {
res: TestLatencyReturn = {
"avg_rtt_ms": avg_rtt_ms,
"avg_time_diff_ms": avg_time_diff_ms,
"max_time_error_ms": max_time_diff_error_ms,
@@ -1451,9 +1487,14 @@ class HostNode(BaseROS2DeviceNode):
"test_count": len(ping_results),
"status": "success",
}
return res
def test_resource(
self, resource: ResourceSlot = None, resources: List[ResourceSlot] = None, device: DeviceSlot = None, devices: List[DeviceSlot] = None
self,
resource: ResourceSlot = None,
resources: List[ResourceSlot] = None,
device: DeviceSlot = None,
devices: List[DeviceSlot] = None,
) -> TestResourceReturn:
if resources is None:
resources = []
@@ -1514,7 +1555,9 @@ class HostNode(BaseROS2DeviceNode):
# 构建服务地址
srv_address = f"/srv{namespace}/s2c_resource_tree"
self.lab_logger().trace(f"[Host Node-Resource] Host -> {device_id} ResourceTree {action} operation started -------")
self.lab_logger().trace(
f"[Host Node-Resource] Host -> {device_id} ResourceTree {action} operation started -------"
)
# 创建服务客户端
sclient = self.create_client(SerialCommand, srv_address)
@@ -1549,7 +1592,9 @@ class HostNode(BaseROS2DeviceNode):
time.sleep(0.05)
response = future.result()
self.lab_logger().trace(f"[Host Node-Resource] Host -> {device_id} ResourceTree {action} operation completed -------")
self.lab_logger().trace(
f"[Host Node-Resource] Host -> {device_id} ResourceTree {action} operation completed -------"
)
return True
except Exception as e:
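With the TypedDict in place, test_latency always returns a fully populated result, using -1.0 and a test_count of 0 when every ping-pong attempt times out. A small sketch of consuming that result, with the field names taken from TestLatencyReturn above:

def summarize_latency(res: dict) -> str:
    # Field names as declared in TestLatencyReturn; sentinels mark the all_timeout case.
    if res["status"] != "success":
        return "latency test failed: all ping-pong attempts timed out"
    return (
        f"avg RTT {res['avg_rtt_ms']:.1f} ms over {res['test_count']} pings, "
        f"max clock error {res['max_time_error_ms']:.1f} ms, task delay {res['task_delay_ms']:.1f} ms"
    )

print(summarize_latency({
    "avg_rtt_ms": 12.4, "avg_time_diff_ms": 0.8, "max_time_error_ms": 1.5,
    "task_delay_ms": 3.2, "raw_delay_ms": 4.0, "test_count": 5, "status": "success",
}))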


@@ -1,182 +0,0 @@
import asyncio
from asyncio import events
import threading
import rclpy
from rclpy.impl.implementation_singleton import rclpy_implementation as _rclpy
from rclpy.executors import await_or_execute, Executor
from rclpy.action import ActionClient, ActionServer
from rclpy.action.server import ServerGoalHandle, GoalResponse, GoalInfo, GoalStatus
from std_msgs.msg import String
from action_tutorials_interfaces.action import Fibonacci
loop = None
def get_event_loop():
global loop
return loop
async def default_handle_accepted_callback_async(goal_handle):
"""Execute the goal."""
await goal_handle.execute()
class ServerGoalHandleX(ServerGoalHandle):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
async def execute(self, execute_callback=None):
# It's possible that there has been a request to cancel the goal prior to executing.
# In this case we want to avoid the illegal state transition to EXECUTING
# but still call the users execute callback to let them handle canceling the goal.
if not self.is_cancel_requested:
self._update_state(_rclpy.GoalEvent.EXECUTE)
await self._action_server.notify_execute_async(self, execute_callback)
class ActionServerX(ActionServer):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.register_handle_accepted_callback(default_handle_accepted_callback_async)
async def _execute_goal_request(self, request_header_and_message):
request_header, goal_request = request_header_and_message
goal_uuid = goal_request.goal_id
goal_info = GoalInfo()
goal_info.goal_id = goal_uuid
self._node.get_logger().debug('New goal request with ID: {0}'.format(goal_uuid.uuid))
# Check if goal ID is already being tracked by this action server
with self._lock:
goal_id_exists = self._handle.goal_exists(goal_info)
accepted = False
if not goal_id_exists:
# Call user goal callback
response = await await_or_execute(self._goal_callback, goal_request.goal)
if not isinstance(response, GoalResponse):
self._node.get_logger().warning(
'Goal request callback did not return a GoalResponse type. Rejecting goal.')
else:
accepted = GoalResponse.ACCEPT == response
if accepted:
# Stamp time of acceptance
goal_info.stamp = self._node.get_clock().now().to_msg()
# Create a goal handle
try:
with self._lock:
goal_handle = ServerGoalHandleX(self, goal_info, goal_request.goal)
except RuntimeError as e:
self._node.get_logger().error(
'Failed to accept new goal with ID {0}: {1}'.format(goal_uuid.uuid, e))
accepted = False
else:
self._goal_handles[bytes(goal_uuid.uuid)] = goal_handle
# Send response
response_msg = self._action_type.Impl.SendGoalService.Response()
response_msg.accepted = accepted
response_msg.stamp = goal_info.stamp
self._handle.send_goal_response(request_header, response_msg)
if not accepted:
self._node.get_logger().debug('New goal rejected: {0}'.format(goal_uuid.uuid))
return
self._node.get_logger().debug('New goal accepted: {0}'.format(goal_uuid.uuid))
# Provide the user a reference to the goal handle
# await await_or_execute(self._handle_accepted_callback, goal_handle)
asyncio.create_task(self._handle_accepted_callback(goal_handle))
async def notify_execute_async(self, goal_handle, execute_callback):
# Use provided callback, defaulting to a previously registered callback
if execute_callback is None:
if self._execute_callback is None:
return
execute_callback = self._execute_callback
# Schedule user callback for execution
self._node.get_logger().info(f"{events.get_running_loop()}")
asyncio.create_task(self._execute_goal(execute_callback, goal_handle))
# loop = asyncio.new_event_loop()
# asyncio.set_event_loop(loop)
# task = loop.create_task(self._execute_goal(execute_callback, goal_handle))
# await task
class ActionClientX(ActionClient):
feedback_queue = asyncio.Queue()
async def feedback_cb(self, msg):
await self.feedback_queue.put(msg)
async def send_goal_async(self, goal_msg):
goal_future = super().send_goal_async(
goal_msg,
feedback_callback=self.feedback_cb
)
client_goal_handle = await asyncio.ensure_future(goal_future)
if not client_goal_handle.accepted:
raise Exception("Goal rejected.")
result_future = client_goal_handle.get_result_async()
while True:
feedback_future = asyncio.ensure_future(self.feedback_queue.get())
tasks = [result_future, feedback_future]
await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
if result_future.done():
result = result_future.result().result
yield (None, result)
break
else:
feedback = feedback_future.result().feedback
yield (feedback, None)
async def main(node):
print('Node started.')
action_client = ActionClientX(node, Fibonacci, 'fibonacci')
goal_msg = Fibonacci.Goal()
goal_msg.order = 10
async for (feedback, result) in action_client.send_goal_async(goal_msg):
if feedback:
print(f'Feedback: {feedback}')
else:
print(f'Result: {result}')
print('Finished.')
async def ros_loop_node(node):
while rclpy.ok():
rclpy.spin_once(node, timeout_sec=0)
await asyncio.sleep(1e-4)
async def ros_loop(executor: Executor):
while rclpy.ok():
executor.spin_once(timeout_sec=0)
await asyncio.sleep(1e-4)
def run_event_loop():
global loop
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.run_forever()
def run_event_loop_in_thread():
thread = threading.Thread(target=run_event_loop, args=())
thread.start()
if __name__ == "__main__":
rclpy.init()
node = rclpy.create_node('async_subscriber')
future = asyncio.wait([ros_loop(node), main()])
asyncio.get_event_loop().run_until_complete(future)


@@ -0,0 +1,28 @@
{
"nodes": [
{
"id": "workbench_1",
"name": "虚拟工作台",
"children": [],
"parent": null,
"type": "device",
"class": "virtual_workbench",
"position": {
"x": 400,
"y": 300,
"z": 0
},
"config": {
"arm_operation_time": 3.0,
"heating_time": 10.0,
"num_heating_stations": 3
},
"data": {
"status": "Ready",
"arm_state": "idle",
"message": "工作台就绪"
}
}
],
"links": []
}
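The new JSON file describes a one-node device graph for the virtual workbench example. How UniLab OS launches such a graph is outside this file; as a minimal sketch, the document can simply be read back to inspect the node's class and simulation parameters (the file name used here is illustrative):

import json

with open("virtual_workbench_graph.json", encoding="utf-8") as f:
    graph = json.load(f)

workbench = graph["nodes"][0]
print(workbench["class"])                            # virtual_workbench
print(workbench["config"]["num_heating_stations"])   # 3
print(len(graph["links"]))                           # 0 links in this example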


@@ -182,3 +182,49 @@ def get_all_subscriptions(instance) -> list:
except Exception:
pass
return subscriptions
def not_action(func: F) -> F:
"""
标记方法为非动作的装饰器
用于装饰 driver 类中的方法,使其在 complete_registry 时不被识别为动作。
适用于辅助方法、内部工具方法等不应暴露为设备动作的公共方法。
Example:
class MyDriver:
@not_action
def helper_method(self):
# 这个方法不会被注册为动作
pass
def actual_action(self, param: str):
# 这个方法会被注册为动作
self.helper_method()
Note:
- 可以与其他装饰器组合使用,@not_action 应放在最外层
- 仅影响 complete_registry 的动作识别,不影响方法的正常调用
"""
@wraps(func)
def wrapper(*args, **kwargs):
return func(*args, **kwargs)
# 在函数上附加标记
wrapper._is_not_action = True # type: ignore[attr-defined]
return wrapper # type: ignore[return-value]
def is_not_action(func) -> bool:
"""
检查函数是否被标记为非动作
Args:
func: 被检查的函数
Returns:
如果函数被 @not_action 装饰则返回 True否则返回 False
"""
return getattr(func, "_is_not_action", False)
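Together, not_action and is_not_action let a driver expose public helper methods without turning them into device actions. A short sketch of how a registry-style scan might skip the marked method, mirroring (but not reproducing) the ImportManager filtering; the import path is assumed to be the same unilabos.utils.decorator module used elsewhere in this changeset:

import inspect
from unilabos.utils.decorator import not_action, is_not_action

class MyDriver:
    @not_action
    def reload_config(self):
        # Public helper; the marker keeps it out of the auto-generated actions.
        pass

    def start_heating(self, temperature: float):
        # Ordinary public method; still discovered as an action.
        self.reload_config()

actions = [
    name
    for name, member in inspect.getmembers(MyDriver, predicate=inspect.isfunction)
    if not name.startswith("_") and not is_not_action(member)
]
print(actions)  # ['start_heating']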


@@ -28,6 +28,7 @@ __all__ = [
from ast import Constant
from unilabos.utils import logger
from unilabos.utils.decorator import is_not_action
class ImportManager:
@@ -275,6 +276,9 @@ class ImportManager:
method_info = self._analyze_method_signature(method)
result["status_methods"][actual_name] = method_info
elif not name.startswith("_"):
# 检查是否被 @not_action 装饰器标记
if is_not_action(method):
continue
# 其他非_开头的方法归类为action
method_info = self._analyze_method_signature(method)
result["action_methods"][name] = method_info
@@ -330,6 +334,9 @@ class ImportManager:
if actual_name not in result["status_methods"]:
result["status_methods"][actual_name] = method_info
else:
# 检查是否被 @not_action 装饰器标记
if self._is_not_action_method(node):
continue
# 其他非_开头的方法归类为action
result["action_methods"][method_name] = method_info
return result
@@ -450,6 +457,13 @@ class ImportManager:
return True
return False
def _is_not_action_method(self, node: ast.FunctionDef) -> bool:
"""检查是否是@not_action装饰的方法"""
for decorator in node.decorator_list:
if isinstance(decorator, ast.Name) and decorator.id == "not_action":
return True
return False
def _get_property_name_from_setter(self, node: ast.FunctionDef) -> str:
"""从setter装饰器中获取属性名"""
for decorator in node.decorator_list:
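When a class is analyzed from source rather than imported, the same marker is detected on the AST, as _is_not_action_method does above. A standalone sketch of that check:

import ast

source = """
class MyDriver:
    @not_action
    def reload_config(self):
        pass

    def start_heating(self, temperature: float):
        pass
"""

for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.FunctionDef):
        skipped = any(isinstance(d, ast.Name) and d.id == "not_action" for d in node.decorator_list)
        print(node.name, "skipped" if skipped else "action")
# reload_config skipped
# start_heating action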


@@ -191,21 +191,6 @@ def configure_logger(loglevel=None, working_dir=None):
# 添加处理器到根日志记录器
root_logger.addHandler(console_handler)
# 降低第三方库的日志级别,避免过多输出
# pymodbus 库的日志太详细,设置为 WARNING
logging.getLogger('pymodbus').setLevel(logging.WARNING)
logging.getLogger('pymodbus.logging').setLevel(logging.WARNING)
logging.getLogger('pymodbus.logging.base').setLevel(logging.WARNING)
logging.getLogger('pymodbus.logging.decoders').setLevel(logging.WARNING)
# websockets 库的日志输出较多,设置为 WARNING
logging.getLogger('websockets').setLevel(logging.WARNING)
logging.getLogger('websockets.client').setLevel(logging.WARNING)
logging.getLogger('websockets.server').setLevel(logging.WARNING)
# ROS 节点的状态更新日志过于频繁,设置为 INFO
logging.getLogger('unilabos.ros.nodes.presets.host_node').setLevel(logging.INFO)
# 如果指定了工作目录,添加文件处理器
if working_dir is not None:
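These per-library suppressions are no longer applied globally by configure_logger. A deployment that still wants pymodbus and websockets quiet can make the equivalent calls in its own startup code; a minimal sketch (setting the package-level logger also covers the child loggers listed above):

import logging

for name in ("pymodbus", "websockets"):
    logging.getLogger(name).setLevel(logging.WARNING)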


@@ -1,7 +1,11 @@
import psutil
import pywinauto
from pywinauto_recorder import UIApplication
from pywinauto_recorder.player import UIPath, click, focus_on_application, exists, find, get_wrapper_path
try:
from pywinauto_recorder import UIApplication
from pywinauto_recorder.player import UIPath, click, focus_on_application, exists, find, get_wrapper_path
except ImportError:
print("未安装pywinauto_recorder部分功能无法使用安装时注意enum")
pass
from pywinauto.controls.uiawrapper import UIAWrapper
from pywinauto.application import WindowSpecification
from pywinauto import findbestmatch
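With the import wrapped in try/except, the module still imports on machines without pywinauto_recorder, but helpers that rely on it will fail when the missing names are referenced. A common companion pattern is an availability flag checked before use; a sketch of that variant (the flag name is illustrative, and the click call is shown schematically rather than taken from this diff):

try:
    from pywinauto_recorder.player import click
    HAS_PYWINAUTO_RECORDER = True
except ImportError:
    HAS_PYWINAUTO_RECORDER = False

def click_ui_path(path: str) -> None:
    if not HAS_PYWINAUTO_RECORDER:
        raise RuntimeError("pywinauto_recorder is not installed; recorder-based actions are unavailable")
    click(path)  # delegate to the recorder helper once availability is confirmed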


@@ -2,7 +2,7 @@
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>unilabos_msgs</name>
<version>0.10.15</version>
<version>0.10.16</version>
<description>ROS2 Messages package for unilabos devices</description>
<maintainer email="changjh@pku.edu.cn">Junhan Chang</maintainer>
<maintainer email="18435084+Xuwznln@users.noreply.github.com">Xuwznln</maintainer>