Compare commits

..

50 Commits

Author SHA1 Message Date
ZiWei
cb9840be59 refactor: Merge tipbox storage left and right into single warehouse 2026-01-15 17:53:56 +08:00
ZiWei
b64de9b509 refactor: Split tipbox storage into left and right warehouses 2026-01-15 17:04:20 +08:00
ZiWei
6f7adce998 Merge branch 'pr/169' into workstation_YB_merge_dev_ready_260113 2026-01-15 14:26:42 +08:00
ZiWei
af15ae0b3e Refactor: Use instance attributes for action names and workflow step IDs 2026-01-15 13:35:36 +08:00
Andy6M
0597029a84 feat: Integrate material search logic and cleanup deprecated files
- Update coin_cell_assembly.py with material search dialog handling
- Update YB_warehouses.py with latest warehouse configurations
- Remove outdated documentation and test data files
2026-01-15 12:17:29 +08:00
ZiWei
5a5f4e037b fix: Correct a typo in the reaction_station directory name 2026-01-14 18:51:01 +08:00
ZiWei
a90613f3de refactor: Move config from module to instance initialization 2026-01-14 18:43:59 +08:00
ZiWei
2b04457037 fix: Resolve unhashable-type error in WareHouse and optimize parent-node deduplication logic 2026-01-14 18:42:14 +08:00
ZiWei
9a06ef3836 Refactor module paths for Bioyond devices in YAML configuration files
- Updated the module path for BioyondDispensingStation in bioyond_dispensing_station.yaml to reflect the new directory structure.
- Updated the module path for BioyondReactionStation and BioyondReactor in reaction_station_bioyond.yaml to align with the revised organization of the codebase.
2026-01-14 12:44:19 +08:00
ZiWei
ce3f2b33c5 Merge branch 'dev' into pr/169 2026-01-14 11:44:04 +08:00
Andy6M
78729ef86c feat(workstation): update bioyond config migration and coin cell material search logic
- Migrate bioyond_cell config to JSON structure and remove global variable dependencies
- Implement material search confirmation dialog auto-handling
- Add documentation: 20260113_物料搜寻确认弹窗自动处理功能.md and 20260113_配置迁移修改总结.md
2026-01-14 09:55:25 +08:00
Xuwznln
6f143b068b Add debug log 2026-01-13 17:51:18 +08:00
ZiWei
03423e4791 feat: Enhance material cache update logic to handle detailed information in returned data 2026-01-13 11:24:10 +08:00
ZiWei
9fc6781406 Merge branch 'dev' into pr/169 2026-01-12 11:37:50 +08:00
ZiWei
9753ef02c3 fix: Fix the serialization and deserialization methods of the Bottle class 2026-01-12 10:50:28 +08:00
ZiWei
6a0614c0c9 Merge branch 'dev' into pr/169 and fix conflicts 2026-01-11 18:14:19 +08:00
ZiWei
a25e8f6853 feat: Add ability to clear all non-core workflows on the server 2026-01-08 11:40:42 +08:00
ZiWei
5c249e66a2 feat: Fetch workflow step IDs dynamically and optimize workflow configuration 2025-12-31 14:42:43 +08:00
ZiWei
93ac095e0a fix: refresh_material_cache 2025-12-29 22:19:26 +08:00
ZiWei
288d9fea91 fix: Change the material unit from μL to mL 2025-12-29 21:33:38 +08:00
ZiWei
81b28cef71 Merge branch 'dev' into pr/169
# Conflicts:
#	unilabos/device_comms/opcua_client/client.py
#	unilabos/device_comms/opcua_client/node/uniopcua.py
#	unilabos/registry/devices/post_process_station.yaml
2025-12-25 13:55:22 +08:00
ZiWei
d57e5ffdae Refactor bioyond_dispensing_station and reaction_station_bioyond YAML configurations
- Removed redundant action value mappings from bioyond_dispensing_station.
- Updated goal properties in bioyond_dispensing_station to use enums for target_stack and other parameters.
- Changed data types for end_point and start_point in reaction_station_bioyond to use string enums (Start, End).
- Simplified descriptions and updated measurement units from μL to mL where applicable.
- Removed unused commands from reaction_station_bioyond to streamline the configuration.
2025-12-25 12:19:37 +08:00
ZiWei
d5e0d76311 fix: Fix data format error when adding materials 2025-12-25 11:23:59 +08:00
ZiWei
beaa1d7213 feat: Add task status event publishing to monitor and report running, timeout, completed, and error states 2025-12-25 09:58:03 +08:00
ZiWei
1e5f6b0c04 feat: Add experiment report simplification, removing redundant information while keeping key details 2025-12-24 11:28:40 +08:00
ZiWei
5ae89d8607 fix: Change Bioyond error reporting to material-change reporting; adjust logging and response messages 2025-12-23 10:55:25 +08:00
ZiWei
74d0ea3379 fix: Handle string and dict return values when adding materials to ensure the cache is updated correctly 2025-12-22 14:37:04 +08:00
ZiWei
440c9965fd fix: Automatically maintain the material cache: update entries on add and remove them on delete 2025-12-18 11:40:59 +08:00
ZiWei
9cac852bc3 Add time-constraint feature and related configuration 2025-12-16 13:03:01 +08:00
ZiWei
de662a42aa Merge branch 'dev' into hrdev 2025-12-16 10:03:13 +08:00
ZiWei
632f9b90d1 feat: remove commented workflow synchronization from reaction_station.py. 2025-12-10 17:43:16 +08:00
ZiWei
d7c970d244 fix: Synchronize the workflow sequence 2025-12-10 17:32:11 +08:00
ZiWei
2c69e663a7 fix: Support workflow_sequence being overridden as a property in BioyondReactionStation 2025-12-10 11:10:57 +08:00
ZiWei
f03ff96ae4 Merge branch 'dev' into hrdev 2025-12-08 20:25:34 +08:00
ZiWei
c68903ed83 Merge branch 'dev' into hrdev 2025-12-08 16:37:26 +08:00
ZiWei
8efbbbe72a Add automatic synchronization of workflow sequences from the Bioyond system and update related configuration 2025-12-08 16:37:01 +08:00
ZiWei
4a23b05abc Add scheduler startup, merge material parameter configuration, and optimize material parameter handling 2025-12-01 14:00:58 +08:00
ZiWei
6d8884a2c7 Merge remote-tracking branch 'upstream/dev' into hrdev 2025-11-28 22:51:09 +08:00
ZiWei
c0e7a69553 feat(registry): Add device configuration file for the post-processing station
Add the post-processing station's YAML configuration, including action mappings, status types, and the device description
2025-11-26 19:59:30 +08:00
ZiWei
fb6ee79577 feat(opcua): Improve node ID parsing compatibility and data type handling
Improve node ID parsing to support multiple formats, including string and numeric identifiers
Add data type conversion so that written values match the expected type
Improve error messages to make debugging node connection issues easier
2025-11-26 19:57:06 +08:00
ZiWei
dbe129caab feat(bioyond): Improve scheduler startup, add exception handling, and update related configuration 2025-11-26 16:32:10 +08:00
ZiWei
7250995891 feat(bioyond): Add scheduler startup with task-queue execution and exception handling 2025-11-26 16:09:16 +08:00
ZiWei
68eddbdffd feat(bioyond): Adjust reactor position configuration and unify the coordinate format 2025-11-23 18:10:56 +08:00
ZiWei
32bd234176 feat(bioyond): Add reactor temperature setting with temperature-range checks and exception handling 2025-11-23 17:16:51 +08:00
ZiWei
3d62e8bf6c feat(bioyond): Improve the task creation flow; clear the task queue whether or not creation succeeds to avoid repeated accumulation 2025-11-23 13:27:31 +08:00
ZiWei
efec1dd501 Merge remote-tracking branch 'upstream/dev' into hrdev 2025-11-21 11:39:17 +08:00
ZiWei
c16756ddb3 feat(bioyond): Update warehouse layout and dimensions to support vertically arranged measurement vials and reagent storage stacks 2025-11-21 11:33:58 +08:00
ZiWei
daf41871a1 feat(bioyond): Add measurement vial configuration with new device parameters 2025-11-21 11:33:32 +08:00
ZiWei
6b0b28becf feat(bioyond): Add measurement vial support with basic parameter configuration 2025-11-21 11:32:53 +08:00
ZiWei
0f7366f3ee feat(bioyond): Add computational experiment design with compound ratio and titration ratio parameters 2025-11-20 12:10:18 +08:00
56 changed files with 7867 additions and 4353 deletions

View File

@@ -1,61 +0,0 @@
# unilabos: Production package (depends on unilabos-env + pip unilabos)
# For production deployment
package:
  name: unilabos
  version: 0.10.16

source:
  path: ../../unilabos
  target_directory: unilabos

build:
  python:
    entry_points:
      - unilab = unilabos.app.main:main
  script:
    - set PIP_NO_INDEX=
    - if: win
      then:
        - copy %RECIPE_DIR%\..\..\MANIFEST.in %SRC_DIR%
        - copy %RECIPE_DIR%\..\..\setup.cfg %SRC_DIR%
        - copy %RECIPE_DIR%\..\..\setup.py %SRC_DIR%
        - pip install %SRC_DIR%
    - if: unix
      then:
        - cp $RECIPE_DIR/../../MANIFEST.in $SRC_DIR
        - cp $RECIPE_DIR/../../setup.cfg $SRC_DIR
        - cp $RECIPE_DIR/../../setup.py $SRC_DIR
        - uv pip install $SRC_DIR

requirements:
  host:
    - python ==3.11.14
    - pip
    - setuptools
    - zstd
    - zstandard
  run:
    - zstd
    - zstandard
    - networkx
    - typing_extensions
    - websockets
    - opentrons_shared_data
    - pint
    - fastapi
    - jinja2
    - requests
    - uvicorn
    - opcua
    - pyserial
    - pandas
    - pymodbus
    - matplotlib
    - pylibftdi
    - uni-lab::unilabos-env ==0.10.16

about:
  repository: https://github.com/deepmodeling/Uni-Lab-OS
  license: GPL-3.0-only
  description: "UniLabOS - Production package with minimal ROS2 dependencies"

View File

@@ -1,39 +0,0 @@
# unilabos-env: conda environment dependencies (ROS2 + conda packages)
package:
  name: unilabos-env
  version: 0.10.16

build:
  noarch: generic

requirements:
  run:
    # Python
    - zstd
    - zstandard
    - conda-forge::python ==3.11.14
    - conda-forge::opencv
    # ROS2 dependencies (from ci-check.yml)
    - robostack-staging::ros-humble-ros-core
    - robostack-staging::ros-humble-action-msgs
    - robostack-staging::ros-humble-std-msgs
    - robostack-staging::ros-humble-geometry-msgs
    - robostack-staging::ros-humble-control-msgs
    - robostack-staging::ros-humble-nav2-msgs
    - robostack-staging::ros-humble-cv-bridge
    - robostack-staging::ros-humble-vision-opencv
    - robostack-staging::ros-humble-tf-transformations
    - robostack-staging::ros-humble-moveit-msgs
    - robostack-staging::ros-humble-tf2-ros
    - robostack-staging::ros-humble-tf2-ros-py
    - conda-forge::transforms3d
    - conda-forge::uv
    # UniLabOS custom messages
    - uni-lab::ros-humble-unilabos-msgs

about:
  repository: https://github.com/deepmodeling/Uni-Lab-OS
  license: GPL-3.0-only
  description: "UniLabOS Environment - ROS2 and conda dependencies (for developers: pip install -e .)"

View File

@@ -1,42 +0,0 @@
# unilabos-full: Full package with all features
# Depends on unilabos + complete ROS2 desktop + dev tools
package:
  name: unilabos-full
  version: 0.10.16

build:
  noarch: generic

requirements:
  run:
    # Base unilabos package (includes unilabos-env)
    - uni-lab::unilabos ==0.10.16
    # Documentation tools
    - sphinx
    - sphinx_rtd_theme
    # Web UI
    - gradio
    - flask
    # Interactive development
    - ipython
    - jupyter
    - jupyros
    - colcon-common-extensions
    # ROS2 full desktop (includes rviz2, gazebo, etc.)
    - robostack-staging::ros-humble-desktop-full
    # Navigation and motion control
    - ros-humble-navigation2
    - ros-humble-ros2-control
    - ros-humble-robot-state-publisher
    - ros-humble-joint-state-publisher
    # MoveIt motion planning
    - ros-humble-moveit
    - ros-humble-moveit-servo
    # Simulation
    - ros-humble-simulation

about:
  repository: https://github.com/deepmodeling/Uni-Lab-OS
  license: GPL-3.0-only
  description: "UniLabOS Full - Complete package with ROS2 Desktop, MoveIt, Navigation2, Gazebo, Jupyter"

.conda/recipe.yaml Normal file
View File

@@ -0,0 +1,91 @@
package:
  name: unilabos
  version: 0.10.15

source:
  path: ../unilabos
  target_directory: unilabos

build:
  python:
    entry_points:
      - unilab = unilabos.app.main:main
  script:
    - set PIP_NO_INDEX=
    - if: win
      then:
        - copy %RECIPE_DIR%\..\MANIFEST.in %SRC_DIR%
        - copy %RECIPE_DIR%\..\setup.cfg %SRC_DIR%
        - copy %RECIPE_DIR%\..\setup.py %SRC_DIR%
        - call %PYTHON% -m pip install %SRC_DIR%
    - if: unix
      then:
        - cp $RECIPE_DIR/../MANIFEST.in $SRC_DIR
        - cp $RECIPE_DIR/../setup.cfg $SRC_DIR
        - cp $RECIPE_DIR/../setup.py $SRC_DIR
        - $PYTHON -m pip install $SRC_DIR

requirements:
  host:
    - python ==3.11.11
    - pip
    - setuptools
    - zstd
    - zstandard
  run:
    - conda-forge::python ==3.11.11
    - compilers
    - cmake
    - zstd
    - zstandard
    - ninja
    - if: unix
      then:
        - make
    - sphinx
    - sphinx_rtd_theme
    - numpy
    - scipy
    - pandas
    - networkx
    - matplotlib
    - pint
    - pyserial
    - pyusb
    - pylibftdi
    - pymodbus
    - python-can
    - pyvisa
    - opencv
    - pydantic
    - fastapi
    - uvicorn
    - gradio
    - flask
    - websockets
    - ipython
    - jupyter
    - jupyros
    - colcon-common-extensions
    - robostack-staging::ros-humble-desktop-full
    - robostack-staging::ros-humble-control-msgs
    - robostack-staging::ros-humble-sensor-msgs
    - robostack-staging::ros-humble-trajectory-msgs
    - ros-humble-navigation2
    - ros-humble-ros2-control
    - ros-humble-robot-state-publisher
    - ros-humble-joint-state-publisher
    - ros-humble-rosbridge-server
    - ros-humble-cv-bridge
    - ros-humble-tf2
    - ros-humble-moveit
    - ros-humble-moveit-servo
    - ros-humble-simulation
    - ros-humble-tf-transformations
    - transforms3d
    - uni-lab::ros-humble-unilabos-msgs

about:
  repository: https://github.com/deepmodeling/Uni-Lab-OS
  license: GPL-3.0-only
  description: "Uni-Lab-OS"

View File

@@ -0,0 +1,9 @@
@echo off
setlocal enabledelayedexpansion
REM upgrade pip
"%PREFIX%\python.exe" -m pip install --upgrade pip
REM install extra deps
"%PREFIX%\python.exe" -m pip install paho-mqtt opentrons_shared_data
"%PREFIX%\python.exe" -m pip install git+https://github.com/Xuwznln/pylabrobot.git

View File

@@ -0,0 +1,9 @@
#!/usr/bin/env bash
set -euxo pipefail
# make sure pip is available
"$PREFIX/bin/python" -m pip install --upgrade pip
# install extra deps
"$PREFIX/bin/python" -m pip install paho-mqtt opentrons_shared_data
"$PREFIX/bin/python" -m pip install git+https://github.com/Xuwznln/pylabrobot.git

.cursorignore Normal file
View File

@@ -0,0 +1,26 @@
.conda
# .github
.idea
# .vscode
output
pylabrobot_repo
recipes
scripts
service
temp
# unilabos/test
# unilabos/app/web
unilabos/device_mesh
unilabos_data
unilabos_msgs
unilabos.egg-info
CONTRIBUTORS
# LICENSE
MANIFEST.in
pyrightconfig.json
# README.md
# README_zh.md
setup.py
setup.cfg
.gitattrubutes
**/__pycache__

View File

@@ -1,19 +0,0 @@
version: 2
updates:
  # GitHub Actions
  - package-ecosystem: "github-actions"
    directory: "/"
    target-branch: "dev"
    schedule:
      interval: "weekly"
      day: "monday"
      time: "06:00"
    open-pull-requests-limit: 5
    reviewers:
      - "msgcenterpy-team"
    labels:
      - "dependencies"
      - "github-actions"
    commit-message:
      prefix: "ci"
      include: "scope"

View File

@@ -1,67 +0,0 @@
name: CI Check

on:
  push:
    branches: [main, dev]
  pull_request:
    branches: [main, dev]

jobs:
  registry-check:
    runs-on: windows-latest
    env:
      # Fix Unicode encoding issue on Windows runner (cp1252 -> utf-8)
      PYTHONIOENCODING: utf-8
      PYTHONUTF8: 1
    defaults:
      run:
        shell: cmd
    steps:
      - uses: actions/checkout@v6
        with:
          fetch-depth: 0

      - name: Setup Miniforge
        uses: conda-incubator/setup-miniconda@v3
        with:
          miniforge-version: latest
          use-mamba: true
          channels: robostack-staging,conda-forge,uni-lab
          channel-priority: flexible
          activate-environment: check-env
          auto-update-conda: false
          show-channel-urls: true

      - name: Install ROS dependencies, uv and unilabos-msgs
        run: |
          echo Installing ROS dependencies...
          mamba install -n check-env conda-forge::uv conda-forge::opencv robostack-staging::ros-humble-ros-core robostack-staging::ros-humble-action-msgs robostack-staging::ros-humble-std-msgs robostack-staging::ros-humble-geometry-msgs robostack-staging::ros-humble-control-msgs robostack-staging::ros-humble-nav2-msgs uni-lab::ros-humble-unilabos-msgs robostack-staging::ros-humble-cv-bridge robostack-staging::ros-humble-vision-opencv robostack-staging::ros-humble-tf-transformations robostack-staging::ros-humble-moveit-msgs robostack-staging::ros-humble-tf2-ros robostack-staging::ros-humble-tf2-ros-py conda-forge::transforms3d -c robostack-staging -c conda-forge -c uni-lab -y

      - name: Install pip dependencies and unilabos
        run: |
          call conda activate check-env
          echo Installing pip dependencies...
          uv pip install -r unilabos/utils/requirements.txt
          uv pip install pywinauto git+https://github.com/Xuwznln/pylabrobot.git
          uv pip uninstall enum34 || echo enum34 not installed, skipping
          uv pip install -e .

      - name: Run check mode (complete_registry)
        run: |
          call conda activate check-env
          echo Running check mode...
          python -m unilabos --check_mode --skip_env_check

      - name: Check for uncommitted changes
        shell: bash
        run: |
          if ! git diff --exit-code; then
            echo "::error::检测到文件变化!请先在本地运行 'python -m unilabos --complete_registry' 并提交变更"
            echo "变化的文件:"
            git diff --name-only
            exit 1
          fi
          echo "检查通过:无文件变化"

View File

@@ -13,11 +13,6 @@ on:
required: false
default: 'win-64'
type: string
build_full:
description: '是否构建完整版 unilabos-full (默认构建轻量版 unilabos)'
required: false
default: false
type: boolean
jobs:
build-conda-pack:
@@ -62,7 +57,7 @@ jobs:
echo "should_build=false" >> $GITHUB_OUTPUT
fi
- uses: actions/checkout@v6
- uses: actions/checkout@v4
if: steps.should_build.outputs.should_build == 'true'
with:
ref: ${{ github.event.inputs.branch }}
@@ -74,7 +69,7 @@ jobs:
with:
miniforge-version: latest
use-mamba: true
python-version: '3.11.14'
python-version: '3.11.11'
channels: conda-forge,robostack-staging,uni-lab,defaults
channel-priority: flexible
activate-environment: unilab
@@ -86,14 +81,7 @@ jobs:
run: |
echo Installing unilabos and dependencies to unilab environment...
echo Using mamba for faster and more reliable dependency resolution...
echo Build full: ${{ github.event.inputs.build_full }}
if "${{ github.event.inputs.build_full }}"=="true" (
echo Installing unilabos-full ^(complete package^)...
mamba install -n unilab uni-lab::unilabos-full conda-pack -c uni-lab -c robostack-staging -c conda-forge -y
) else (
echo Installing unilabos ^(minimal package^)...
mamba install -n unilab uni-lab::unilabos conda-pack -c uni-lab -c robostack-staging -c conda-forge -y
)
mamba install -n unilab uni-lab::unilabos conda-pack -c uni-lab -c robostack-staging -c conda-forge -y
- name: Install conda-pack, unilabos and dependencies (Unix)
if: steps.should_build.outputs.should_build == 'true' && matrix.platform != 'win-64'
@@ -101,14 +89,7 @@ jobs:
run: |
echo "Installing unilabos and dependencies to unilab environment..."
echo "Using mamba for faster and more reliable dependency resolution..."
echo "Build full: ${{ github.event.inputs.build_full }}"
if [[ "${{ github.event.inputs.build_full }}" == "true" ]]; then
echo "Installing unilabos-full (complete package)..."
mamba install -n unilab uni-lab::unilabos-full conda-pack -c uni-lab -c robostack-staging -c conda-forge -y
else
echo "Installing unilabos (minimal package)..."
mamba install -n unilab uni-lab::unilabos conda-pack -c uni-lab -c robostack-staging -c conda-forge -y
fi
mamba install -n unilab uni-lab::unilabos conda-pack -c uni-lab -c robostack-staging -c conda-forge -y
- name: Get latest ros-humble-unilabos-msgs version (Windows)
if: steps.should_build.outputs.should_build == 'true' && matrix.platform == 'win-64'
@@ -312,7 +293,7 @@ jobs:
- name: Upload distribution package
if: steps.should_build.outputs.should_build == 'true'
uses: actions/upload-artifact@v6
uses: actions/upload-artifact@v4
with:
name: unilab-pack-${{ matrix.platform }}-${{ github.event.inputs.branch }}
path: dist-package/
@@ -327,12 +308,7 @@ jobs:
echo ==========================================
echo Platform: ${{ matrix.platform }}
echo Branch: ${{ github.event.inputs.branch }}
echo Python version: 3.11.14
if "${{ github.event.inputs.build_full }}"=="true" (
echo Package: unilabos-full ^(complete^)
) else (
echo Package: unilabos ^(minimal^)
)
echo Python version: 3.11.11
echo.
echo Distribution package contents:
dir dist-package
@@ -352,12 +328,7 @@ jobs:
echo "=========================================="
echo "Platform: ${{ matrix.platform }}"
echo "Branch: ${{ github.event.inputs.branch }}"
echo "Python version: 3.11.14"
if [[ "${{ github.event.inputs.build_full }}" == "true" ]]; then
echo "Package: unilabos-full (complete)"
else
echo "Package: unilabos (minimal)"
fi
echo "Python version: 3.11.11"
echo ""
echo "Distribution package contents:"
ls -lh dist-package/

View File

@@ -1,12 +1,10 @@
name: Deploy Docs
on:
# 在 CI Check 成功后自动触发(仅 main 分支)
workflow_run:
workflows: ["CI Check"]
types: [completed]
push:
branches: [main]
pull_request:
branches: [main]
# 手动触发
workflow_dispatch:
inputs:
branch:
@@ -35,19 +33,12 @@ concurrency:
jobs:
# Build documentation
build:
# 只在以下情况运行:
# 1. workflow_run 触发且 CI Check 成功
# 2. 手动触发
if: |
github.event_name == 'workflow_dispatch' ||
(github.event_name == 'workflow_run' && github.event.workflow_run.conclusion == 'success')
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v6
uses: actions/checkout@v4
with:
# workflow_run 时使用触发工作流的分支,手动触发时使用输入的分支
ref: ${{ github.event.workflow_run.head_branch || github.event.inputs.branch || github.ref }}
ref: ${{ github.event.inputs.branch || github.ref }}
fetch-depth: 0
- name: Setup Miniforge (with mamba)
@@ -55,7 +46,7 @@ jobs:
with:
miniforge-version: latest
use-mamba: true
python-version: '3.11.14'
python-version: '3.11.11'
channels: conda-forge,robostack-staging,uni-lab,defaults
channel-priority: flexible
activate-environment: unilab
@@ -84,10 +75,8 @@ jobs:
- name: Setup Pages
id: pages
uses: actions/configure-pages@v5
if: |
github.event.workflow_run.head_branch == 'main' ||
(github.event_name == 'workflow_dispatch' && github.event.inputs.deploy_to_pages == 'true')
uses: actions/configure-pages@v4
if: github.ref == 'refs/heads/main' || (github.event_name == 'workflow_dispatch' && github.event.inputs.deploy_to_pages == 'true')
- name: Build Sphinx documentation
run: |
@@ -105,18 +94,14 @@ jobs:
test -f docs/_build/html/index.html && echo "✓ index.html exists" || echo "✗ index.html missing"
- name: Upload build artifacts
uses: actions/upload-pages-artifact@v4
if: |
github.event.workflow_run.head_branch == 'main' ||
(github.event_name == 'workflow_dispatch' && github.event.inputs.deploy_to_pages == 'true')
uses: actions/upload-pages-artifact@v3
if: github.ref == 'refs/heads/main' || (github.event_name == 'workflow_dispatch' && github.event.inputs.deploy_to_pages == 'true')
with:
path: docs/_build/html
# Deploy to GitHub Pages
deploy:
if: |
github.event.workflow_run.head_branch == 'main' ||
(github.event_name == 'workflow_dispatch' && github.event.inputs.deploy_to_pages == 'true')
if: github.ref == 'refs/heads/main' || (github.event_name == 'workflow_dispatch' && github.event.inputs.deploy_to_pages == 'true')
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}

View File

@@ -1,16 +1,11 @@
name: Multi-Platform Conda Build
on:
# 在 CI Check 工作流完成后触发(仅限 main/dev 分支)
workflow_run:
workflows: ["CI Check"]
types:
- completed
branches: [main, dev]
# 支持 tag 推送(不依赖 CI Check
push:
branches: [main, dev]
tags: ['v*']
# 手动触发
pull_request:
branches: [main, dev]
workflow_dispatch:
inputs:
platforms:
@@ -22,37 +17,9 @@ on:
required: false
default: false
type: boolean
skip_ci_check:
description: '跳过等待 CI Check (手动触发时可选)'
required: false
default: false
type: boolean
jobs:
# 等待 CI Check 完成的 job (仅用于 workflow_run 触发)
wait-for-ci:
runs-on: ubuntu-latest
if: github.event_name == 'workflow_run'
outputs:
should_continue: ${{ steps.check.outputs.should_continue }}
steps:
- name: Check CI status
id: check
run: |
if [[ "${{ github.event.workflow_run.conclusion }}" == "success" ]]; then
echo "should_continue=true" >> $GITHUB_OUTPUT
echo "CI Check passed, proceeding with build"
else
echo "should_continue=false" >> $GITHUB_OUTPUT
echo "CI Check did not succeed (status: ${{ github.event.workflow_run.conclusion }}), skipping build"
fi
build:
needs: [wait-for-ci]
# 运行条件workflow_run 触发且 CI 成功,或者其他触发方式
if: |
always() &&
(needs.wait-for-ci.result == 'skipped' || needs.wait-for-ci.outputs.should_continue == 'true')
strategy:
fail-fast: false
matrix:
@@ -77,10 +44,8 @@ jobs:
shell: bash -l {0}
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v4
with:
# 如果是 workflow_run 触发,使用触发 CI Check 的 commit
ref: ${{ github.event.workflow_run.head_sha || github.ref }}
fetch-depth: 0
- name: Check if platform should be built
@@ -104,6 +69,7 @@ jobs:
channels: conda-forge,robostack-staging,defaults
channel-priority: strict
activate-environment: build-env
auto-activate-base: false
auto-update-conda: false
show-channel-urls: true
@@ -149,7 +115,7 @@ jobs:
- name: Upload conda package artifacts
if: steps.should_build.outputs.should_build == 'true'
uses: actions/upload-artifact@v6
uses: actions/upload-artifact@v4
with:
name: conda-package-${{ matrix.platform }}
path: conda-packages-temp

View File

@@ -1,62 +1,25 @@
name: UniLabOS Conda Build
on:
# 在 CI Check 成功后自动触发
workflow_run:
workflows: ["CI Check"]
types: [completed]
branches: [main, dev]
# 标签推送时直接触发(发布版本)
push:
branches: [main, dev]
tags: ['v*']
# 手动触发
pull_request:
branches: [main, dev]
workflow_dispatch:
inputs:
platforms:
description: '选择构建平台 (逗号分隔): linux-64, osx-64, osx-arm64, win-64'
required: false
default: 'linux-64'
build_full:
description: '是否构建 unilabos-full 完整包 (默认只构建 unilabos 基础包)'
required: false
default: false
type: boolean
upload_to_anaconda:
description: '是否上传到Anaconda.org'
required: false
default: false
type: boolean
skip_ci_check:
description: '跳过等待 CI Check (手动触发时可选)'
required: false
default: false
type: boolean
jobs:
# 等待 CI Check 完成的 job (仅用于 workflow_run 触发)
wait-for-ci:
runs-on: ubuntu-latest
if: github.event_name == 'workflow_run'
outputs:
should_continue: ${{ steps.check.outputs.should_continue }}
steps:
- name: Check CI status
id: check
run: |
if [[ "${{ github.event.workflow_run.conclusion }}" == "success" ]]; then
echo "should_continue=true" >> $GITHUB_OUTPUT
echo "CI Check passed, proceeding with build"
else
echo "should_continue=false" >> $GITHUB_OUTPUT
echo "CI Check did not succeed (status: ${{ github.event.workflow_run.conclusion }}), skipping build"
fi
build:
needs: [wait-for-ci]
# 运行条件workflow_run 触发且 CI 成功,或者其他触发方式
if: |
always() &&
(needs.wait-for-ci.result == 'skipped' || needs.wait-for-ci.outputs.should_continue == 'true')
strategy:
fail-fast: false
matrix:
@@ -77,10 +40,8 @@ jobs:
shell: bash -l {0}
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v4
with:
# 如果是 workflow_run 触发,使用触发 CI Check 的 commit
ref: ${{ github.event.workflow_run.head_sha || github.ref }}
fetch-depth: 0
- name: Check if platform should be built
@@ -104,6 +65,7 @@ jobs:
channels: conda-forge,robostack-staging,uni-lab,defaults
channel-priority: strict
activate-environment: build-env
auto-activate-base: false
auto-update-conda: false
show-channel-urls: true
@@ -119,33 +81,12 @@ jobs:
conda list | grep -E "(rattler-build|anaconda-client)"
echo "Platform: ${{ matrix.platform }}"
echo "OS: ${{ matrix.os }}"
echo "Build full package: ${{ github.event.inputs.build_full || 'false' }}"
echo "Building packages:"
echo " - unilabos-env (environment dependencies)"
echo " - unilabos (with pip package)"
if [[ "${{ github.event.inputs.build_full }}" == "true" ]]; then
echo " - unilabos-full (complete package)"
fi
echo "Building UniLabOS package"
- name: Build unilabos-env (conda environment only, noarch)
- name: Build conda package
if: steps.should_build.outputs.should_build == 'true'
run: |
echo "Building unilabos-env (conda environment dependencies)..."
rattler-build build -r .conda/environment/recipe.yaml -c uni-lab -c robostack-staging -c conda-forge
- name: Build unilabos (with pip package)
if: steps.should_build.outputs.should_build == 'true'
run: |
echo "Building unilabos package..."
rattler-build build -r .conda/base/recipe.yaml -c uni-lab -c robostack-staging -c conda-forge --channel ./output
- name: Build unilabos-full - Only when explicitly requested
if: |
steps.should_build.outputs.should_build == 'true' &&
github.event.inputs.build_full == 'true'
run: |
echo "Building unilabos-full package on ${{ matrix.platform }}..."
rattler-build build -r .conda/full/recipe.yaml -c uni-lab -c robostack-staging -c conda-forge --channel ./output
rattler-build build -r .conda/recipe.yaml -c uni-lab -c robostack-staging -c conda-forge
- name: List built packages
if: steps.should_build.outputs.should_build == 'true'
@@ -167,7 +108,7 @@ jobs:
- name: Upload conda package artifacts
if: steps.should_build.outputs.should_build == 'true'
uses: actions/upload-artifact@v6
uses: actions/upload-artifact@v4
with:
name: conda-package-unilabos-${{ matrix.platform }}
path: conda-packages-temp

.gitignore vendored
View File

@@ -4,7 +4,6 @@ temp/
output/
unilabos_data/
pyrightconfig.json
.cursorignore
## Python
# Byte-compiled / optimized / DLL files

View File

@@ -1,5 +1,4 @@
recursive-include unilabos/test *
recursive-include unilabos/utils *
recursive-include unilabos/registry *.yaml
recursive-include unilabos/app/web/static *
recursive-include unilabos/app/web/templates *

View File

@@ -31,46 +31,26 @@ Detailed documentation can be found at:
## Quick Start
### 1. Setup Conda Environment
1. Setup Conda Environment
Uni-Lab-OS recommends using `mamba` for environment management. Choose the package that fits your needs:
| Package | Use Case | Contents |
|---------|----------|----------|
| `unilabos` | **Recommended for most users** | Complete package, ready to use |
| `unilabos-env` | Developers (editable install) | Environment only, install unilabos via pip |
| `unilabos-full` | Simulation/Visualization | unilabos + ROS2 Desktop + Gazebo + MoveIt |
Uni-Lab-OS recommends using `mamba` for environment management:
```bash
# Create new environment
mamba create -n unilab python=3.11.14
mamba create -n unilab python=3.11.11
mamba activate unilab
# Option A: Standard installation (recommended for most users)
mamba install uni-lab::unilabos -c robostack-staging -c conda-forge
# Option B: For developers (editable mode development)
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
# Then install unilabos and dependencies:
git clone https://github.com/deepmodeling/Uni-Lab-OS.git && cd Uni-Lab-OS
pip install -e .
uv pip install -r unilabos/utils/requirements.txt
# Option C: Full installation (simulation/visualization)
mamba install uni-lab::unilabos-full -c robostack-staging -c conda-forge
mamba install -n unilab uni-lab::unilabos -c robostack-staging -c conda-forge
```
**When to use which?**
- **unilabos**: Standard installation for production deployment and general usage (recommended)
- **unilabos-env**: For developers who need `pip install -e .` editable mode, modify source code
- **unilabos-full**: For simulation (Gazebo), visualization (rviz2), and Jupyter notebooks
### 2. Clone Repository (Optional, for developers)
2. Install Dev Uni-Lab-OS
```bash
# Clone the repository (only needed for development or examples)
# Clone the repository
git clone https://github.com/deepmodeling/Uni-Lab-OS.git
cd Uni-Lab-OS
# Install Uni-Lab-OS
pip install .
```
3. Start Uni-Lab System

View File

@@ -31,46 +31,26 @@ Uni-Lab-OS 是一个用于实验室自动化的综合平台,旨在连接和控
## 快速开始
### 1. 配置 Conda 环境
1. 配置 Conda 环境
Uni-Lab-OS 建议使用 `mamba` 管理环境。根据您的需求选择合适的安装包:
| 安装包 | 适用场景 | 包含内容 |
|--------|----------|----------|
| `unilabos` | **推荐大多数用户** | 完整安装包,开箱即用 |
| `unilabos-env` | 开发者(可编辑安装) | 仅环境依赖,通过 pip 安装 unilabos |
| `unilabos-full` | 仿真/可视化 | unilabos + ROS2 桌面版 + Gazebo + MoveIt |
Uni-Lab-OS 建议使用 `mamba` 管理环境。根据您的操作系统选择适当的环境文件:
```bash
# 创建新环境
mamba create -n unilab python=3.11.14
mamba create -n unilab python=3.11.11
mamba activate unilab
# 方案 A标准安装推荐大多数用户
mamba install uni-lab::unilabos -c robostack-staging -c conda-forge
# 方案 B开发者环境可编辑模式开发
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
# 然后安装 unilabos 和依赖:
git clone https://github.com/deepmodeling/Uni-Lab-OS.git && cd Uni-Lab-OS
pip install -e .
uv pip install -r unilabos/utils/requirements.txt
# 方案 C完整安装仿真/可视化)
mamba install uni-lab::unilabos-full -c robostack-staging -c conda-forge
mamba install -n unilab uni-lab::unilabos -c robostack-staging -c conda-forge
```
**如何选择?**
- **unilabos**:标准安装,适用于生产部署和日常使用(推荐)
- **unilabos-env**:开发者使用,支持 `pip install -e .` 可编辑模式,可修改源代码
- **unilabos-full**需要仿真Gazebo、可视化rviz2或 Jupyter Notebook
### 2. 克隆仓库(可选,供开发者使用)
2. 安装开发版 Uni-Lab-OS:
```bash
# 克隆仓库(仅开发或查看示例时需要)
# 克隆仓库
git clone https://github.com/deepmodeling/Uni-Lab-OS.git
cd Uni-Lab-OS
# 安装 Uni-Lab-OS
pip install .
```
3. 启动 Uni-Lab 系统

View File

@@ -31,14 +31,6 @@
详细的安装步骤请参考 [安装指南](installation.md)。
**选择合适的安装包:**
| 安装包 | 适用场景 | 包含组件 |
|--------|----------|----------|
| `unilabos` | **推荐大多数用户**,生产部署 | 完整安装包,开箱即用 |
| `unilabos-env` | 开发者(可编辑安装) | 仅环境依赖,通过 pip 安装 unilabos |
| `unilabos-full` | 仿真/可视化 | unilabos + 完整 ROS2 桌面版 + Gazebo + MoveIt |
**关键步骤:**
```bash
@@ -46,30 +38,15 @@
# 下载 Miniforge: https://github.com/conda-forge/miniforge/releases
# 2. 创建 Conda 环境
mamba create -n unilab python=3.11.14
mamba create -n unilab python=3.11.11
# 3. 激活环境
mamba activate unilab
# 4. 安装 Uni-Lab-OS(选择其一)
# 方案 A标准安装推荐大多数用户
# 4. 安装 Uni-Lab-OS
mamba install uni-lab::unilabos -c robostack-staging -c conda-forge
# 方案 B开发者环境可编辑模式开发
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
pip install -e /path/to/Uni-Lab-OS # 可编辑安装
uv pip install -r unilabos/utils/requirements.txt # 安装 pip 依赖
# 方案 C完整版仿真/可视化)
mamba install uni-lab::unilabos-full -c robostack-staging -c conda-forge
```
**选择建议:**
- **日常使用/生产部署**:使用 `unilabos`(推荐),完整功能,开箱即用
- **开发者**:使用 `unilabos-env` + `pip install -e .` + `uv pip install -r unilabos/utils/requirements.txt`,代码修改立即生效
- **仿真/可视化**:使用 `unilabos-full`,含 Gazebo、rviz2、MoveIt
#### 1.2 验证安装
```bash
@@ -791,43 +768,7 @@ Waiting for host service...
详细的设备驱动编写指南请参考 [添加设备驱动](../developer_guide/add_device.md)。
#### 9.1 开发环境准备
**推荐使用 `unilabos-env` + `pip install -e .` + `uv pip install`** 进行设备开发:
```bash
# 1. 创建环境并安装 unilabos-envROS2 + conda 依赖 + uv
mamba create -n unilab python=3.11.14
conda activate unilab
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
# 2. 克隆代码
git clone https://github.com/deepmodeling/Uni-Lab-OS.git
cd Uni-Lab-OS
# 3. 以可编辑模式安装(推荐使用脚本,自动检测中文环境)
python scripts/dev_install.py
# 或手动安装:
pip install -e .
uv pip install -r unilabos/utils/requirements.txt
```
**为什么使用这种方式?**
- `unilabos-env` 提供 ROS2 核心组件和 uv通过 conda 安装,避免编译)
- `unilabos/utils/requirements.txt` 包含所有运行时需要的 pip 依赖
- `dev_install.py` 自动检测中文环境,中文系统自动使用清华镜像
- 使用 `uv` 替代 `pip`,安装速度更快
- 可编辑模式:代码修改**立即生效**,无需重新安装
**如果安装失败或速度太慢**,可以手动执行(使用清华镜像):
```bash
pip install -e . -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
uv pip install -r unilabos/utils/requirements.txt -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
```
#### 9.2 为什么需要自定义设备?
#### 9.1 为什么需要自定义设备?
Uni-Lab-OS 内置了常见设备,但您的实验室可能有特殊设备需要集成:
@@ -836,7 +777,7 @@ Uni-Lab-OS 内置了常见设备,但您的实验室可能有特殊设备需要
- 特殊的实验流程
- 第三方设备集成
#### 9.3 创建 Python 包
#### 9.2 创建 Python 包
为了方便开发和管理,建议为您的实验室创建独立的 Python 包。
@@ -873,7 +814,7 @@ touch my_lab_devices/my_lab_devices/__init__.py
touch my_lab_devices/my_lab_devices/devices/__init__.py
```
#### 9.4 创建 setup.py
#### 9.3 创建 setup.py
```python
# my_lab_devices/setup.py
@@ -904,7 +845,7 @@ setup(
)
```
#### 9.5 开发安装
#### 9.4 开发安装
使用 `-e` 参数进行可编辑安装,这样代码修改后立即生效:
@@ -919,7 +860,7 @@ pip install -e . -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
- 方便调试和测试
- 支持版本控制git
#### 9.6 编写设备驱动
#### 9.5 编写设备驱动
创建设备驱动文件:
@@ -1060,7 +1001,7 @@ class MyPump:
- **返回 Dict**:所有动作方法返回字典类型
- **文档字符串**:详细说明参数和功能
#### 9.7 测试设备驱动
#### 9.6 测试设备驱动
创建简单的测试脚本:

View File

@@ -13,26 +13,15 @@
- 开发者需要 Git 和基本的 Python 开发知识
- 自定义 msgs 需要 GitHub 账号
## 安装包选择
Uni-Lab-OS 提供三个安装包版本,根据您的需求选择:
| 安装包 | 适用场景 | 包含组件 | 磁盘占用 |
|--------|----------|----------|----------|
| **unilabos** | **推荐大多数用户**,生产部署 | 完整安装包,开箱即用 | ~2-3 GB |
| **unilabos-env** | 开发者环境(可编辑安装) | 仅环境依赖,通过 pip 安装 unilabos | ~2 GB |
| **unilabos-full** | 仿真可视化、完整功能体验 | unilabos + 完整 ROS2 桌面版 + Gazebo + MoveIt | ~8-10 GB |
## 安装方式选择
根据您的使用场景,选择合适的安装方式:
| 安装方式 | 适用人群 | 推荐安装包 | 特点 | 安装时间 |
| ---------------------- | -------------------- | ----------------- | ------------------------------ | ---------------------------- |
| **方式一:一键安装** | 快速体验、演示 | 预打包环境 | 离线可用,无需配置 | 5-10 分钟 (网络良好的情况下) |
| **方式二:手动安装** | **大多数用户** | `unilabos` | 完整功能,开箱即用 | 10-20 分钟 |
| **方式三:开发者安装** | 开发者、需要修改源码 | `unilabos-env` | 可编辑模式,支持自定义开发 | 20-30 分钟 |
| **仿真/可视化** | 仿真测试、可视化调试 | `unilabos-full` | 含 Gazebo、rviz2、MoveIt | 30-60 分钟 |
| 安装方式 | 适用人群 | 特点 | 安装时间 |
| ---------------------- | -------------------- | ------------------------------ | ---------------------------- |
| **方式一:一键安装** | 实验室用户、快速体验 | 预打包环境,离线可用,无需配置 | 5-10 分钟 (网络良好的情况下) |
| **方式二:手动安装** | 标准用户、生产环境 | 灵活配置,版本可控 | 10-20 分钟 |
| **方式三:开发者安装** | 开发者、需要修改源码 | 可编辑模式,支持自定义 msgs | 20-30 分钟 |
---
@@ -155,38 +144,17 @@ bash Miniforge3-$(uname)-$(uname -m).sh
使用以下命令创建 Uni-Lab 专用环境:
```bash
mamba create -n unilab python=3.11.14 # 目前ros2组件依赖版本大多为3.11.14
mamba create -n unilab python=3.11.11 # 目前ros2组件依赖版本大多为3.11.11
mamba activate unilab
# 选择安装包(三选一):
# 方案 A标准安装推荐大多数用户
mamba install uni-lab::unilabos -c robostack-staging -c conda-forge
# 方案 B开发者环境可编辑模式开发
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
# 然后安装 unilabos 和 pip 依赖:
git clone https://github.com/deepmodeling/Uni-Lab-OS.git && cd Uni-Lab-OS
pip install -e .
uv pip install -r unilabos/utils/requirements.txt
# 方案 C完整版含仿真和可视化工具
mamba install uni-lab::unilabos-full -c robostack-staging -c conda-forge
mamba install -n unilab uni-lab::unilabos -c robostack-staging -c conda-forge
```
**参数说明**:
- `-n unilab`: 创建名为 "unilab" 的环境
- `uni-lab::unilabos`: 安装 unilabos 完整包,开箱即用(推荐)
- `uni-lab::unilabos-env`: 仅安装环境依赖,适合开发者使用 `pip install -e .`
- `uni-lab::unilabos-full`: 安装完整包(含 ROS2 Desktop、Gazebo、MoveIt 等)
- `uni-lab::unilabos`: 从 uni-lab channel 安装 unilabos 包
- `-c robostack-staging -c conda-forge`: 添加额外的软件源
**包选择建议**
- **日常使用/生产部署**:安装 `unilabos`(推荐,完整功能,开箱即用)
- **开发者**:安装 `unilabos-env`,然后使用 `uv pip install -r unilabos/utils/requirements.txt` 安装依赖,再 `pip install -e .` 进行可编辑安装
- **仿真/可视化**:安装 `unilabos-full`Gazebo、rviz2、MoveIt
**如果遇到网络问题**,可以使用清华镜像源加速下载:
```bash
@@ -195,14 +163,8 @@ mamba config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/m
mamba config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
mamba config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/
# 然后重新执行安装命令(推荐标准安装)
# 然后重新执行安装命令
mamba create -n unilab uni-lab::unilabos -c robostack-staging
# 或完整版(仿真/可视化)
mamba create -n unilab uni-lab::unilabos-full -c robostack-staging
# pip 安装时使用清华镜像(开发者安装时使用)
uv pip install -r unilabos/utils/requirements.txt -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
```
### 第三步:激活环境
@@ -241,87 +203,58 @@ cd Uni-Lab-OS
cd Uni-Lab-OS
```
### 第二步:安装开发环境unilabos-env
### 第二步:安装基础环境
**重要**:开发者请使用 `unilabos-env` 包,它专为开发者设计:
- 包含 ROS2 核心组件和消息包ros-humble-ros-core、std-msgs、geometry-msgs 等)
- 包含 transforms3d、cv-bridge、tf2 等 conda 依赖
- 包含 `uv` 工具,用于快速安装 pip 依赖
- **不包含** pip 依赖和 unilabos 包(由 `pip install -e .` 和 `uv pip install` 安装)
**推荐方式**:先通过**方式一(一键安装)**或**方式二(手动安装)**完成基础环境的安装这将包含所有必需的依赖项ROS2、msgs 等)。
#### 选项 A通过一键安装推荐
参考上文"方式一:一键安装",完成基础环境的安装后,激活环境:
```bash
# 创建并激活环境
mamba create -n unilab python=3.11.14
conda activate unilab
# 安装开发者环境包ROS2 + conda 依赖 + uv
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
```
### 第三步:安装 pip 依赖和可编辑模式安装
#### 选项 B通过手动安装
克隆代码并安装依赖
参考上文"方式二:手动安装",创建并安装环境
```bash
mamba create -n unilab python=3.11.11
conda activate unilab
mamba install -n unilab uni-lab::unilabos -c robostack-staging -c conda-forge
```
**说明**:这会安装包括 Python 3.11.11、ROS2 Humble、ros-humble-unilabos-msgs 和所有必需依赖
### 第三步:切换到开发版本
现在你已经有了一个完整可用的 Uni-Lab 环境,接下来将 unilabos 包切换为开发版本:
```bash
# 确保环境已激活
conda activate unilab
# 克隆仓库(如果还未克隆
git clone https://github.com/deepmodeling/Uni-Lab-OS.git
cd Uni-Lab-OS
# 卸载 pip 安装的 unilabos保留所有 conda 依赖
pip uninstall unilabos -y
# 切换到 dev 分支(可选
# 克隆 dev 分支(如果还未克隆
cd /path/to/your/workspace
git clone -b dev https://github.com/deepmodeling/Uni-Lab-OS.git
# 或者如果已经克隆,切换到 dev 分支
cd Uni-Lab-OS
git checkout dev
git pull
```
**推荐:使用安装脚本**(自动检测中文环境,使用 uv 加速):
```bash
# 自动检测中文环境,如果是中文系统则使用清华镜像
python scripts/dev_install.py
# 或者手动指定:
python scripts/dev_install.py --china # 强制使用清华镜像
python scripts/dev_install.py --no-mirror # 强制使用 PyPI
python scripts/dev_install.py --skip-deps # 跳过 pip 依赖安装
python scripts/dev_install.py --use-pip # 使用 pip 而非 uv
```
**手动安装**(如果脚本安装失败或速度太慢):
```bash
# 1. 安装 unilabos可编辑模式
pip install -e .
# 2. 使用 uv 安装 pip 依赖(推荐,速度更快)
uv pip install -r unilabos/utils/requirements.txt
# 国内用户使用清华镜像:
# 以可编辑模式安装开发版 unilabos
pip install -e . -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
uv pip install -r unilabos/utils/requirements.txt -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
```
**注意**
- `uv` 已包含在 `unilabos-env` 中,无需单独安装
- `unilabos/utils/requirements.txt` 包含运行 unilabos 所需的所有 pip 依赖
- 部分特殊包(如 pylabrobot会在运行时由 unilabos 自动检测并安装
**参数说明**
**为什么使用可编辑模式?**
- `-e` (editable mode):代码修改**立即生效**,无需重新安装
- 适合开发调试:修改代码后直接运行测试
- 与 `unilabos-env` 配合:环境依赖由 conda 管理unilabos 代码由 pip 管理
**验证安装**
```bash
# 检查 unilabos 版本
python -c "import unilabos; print(unilabos.__version__)"
# 检查安装位置(应该指向你的代码目录)
pip show unilabos | grep Location
```
- `-e`: editable mode可编辑模式代码修改立即生效无需重新安装
- `-i`: 使用清华镜像源加速下载
- `pip uninstall unilabos`: 只卸载 pip 安装的 unilabos 包,不影响 conda 安装的其他依赖(如 ROS2、msgs 等)
### 第四步:安装或自定义 ros-humble-unilabos-msgs可选
@@ -531,45 +464,7 @@ cd $CONDA_PREFIX/envs/unilab
### 问题 8: 环境很大,有办法减小吗?
**解决方案**:
1. **使用 `unilabos` 标准版**(推荐大多数用户):
```bash
mamba install uni-lab::unilabos -c robostack-staging -c conda-forge
```
标准版包含完整功能,环境大小约 2-3GB相比完整版的 8-10GB
2. **使用 `unilabos-env` 开发者版**(最小化):
```bash
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
# 然后手动安装依赖
pip install -e .
uv pip install -r unilabos/utils/requirements.txt
```
开发者版只包含环境依赖,体积最小约 2GB。
3. **按需安装额外组件**
如果后续需要特定功能,可以单独安装:
```bash
# 需要 Jupyter
mamba install jupyter jupyros
# 需要可视化
mamba install matplotlib opencv
# 需要仿真(注意:这会安装大量依赖)
mamba install ros-humble-gazebo-ros
```
4. **预打包环境问题**
预打包环境(方式一)包含所有依赖,通常较大(压缩后 2-5GB。这是为了确保离线安装和完整功能。
**包选择建议**
| 需求 | 推荐包 | 预估大小 |
|------|--------|----------|
| 日常使用/生产部署 | `unilabos` | ~2-3 GB |
| 开发调试(可编辑模式) | `unilabos-env` | ~2 GB |
| 仿真/可视化 | `unilabos-full` | ~8-10 GB |
**解决方案**: 预打包的环境包含所有依赖,通常较大(压缩后 2-5GB。这是为了确保离线安装和完整功能。如果空间有限考虑使用方式二手动安装只安装需要的组件。
### 问题 9: 如何更新到最新版本?
@@ -616,7 +511,6 @@ mamba update ros-humble-unilabos-msgs -c uni-lab -c robostack-staging -c conda-f
**提示**:
- **大多数用户**推荐使用方式二(手动安装)的 `unilabos` 标准版
- **开发者**推荐使用方式三(开发者安装),安装 `unilabos-env` 后使用 `uv pip install -r unilabos/utils/requirements.txt` 安装依赖
- **仿真/可视化**推荐安装 `unilabos-full` 完整版
- **快速体验和演示**推荐使用方式一(一键安装)
- 生产环境推荐使用方式二(手动安装)的稳定版本
- 开发和测试推荐使用方式三(开发者安装)
- 快速体验和演示推荐使用方式一(一键安装)

View File

@@ -1,6 +1,6 @@
package:
name: ros-humble-unilabos-msgs
version: 0.10.16
version: 0.10.15
source:
path: ../../unilabos_msgs
target_directory: src
@@ -25,7 +25,7 @@ requirements:
build:
- ${{ compiler('cxx') }}
- ${{ compiler('c') }}
- python ==3.11.14
- python ==3.11.11
- numpy
- if: build_platform != target_platform
then:
@@ -63,14 +63,14 @@ requirements:
- robostack-staging::ros-humble-rosidl-default-generators
- robostack-staging::ros-humble-std-msgs
- robostack-staging::ros-humble-geometry-msgs
- robostack-staging::ros2-distro-mutex=0.7
- robostack-staging::ros2-distro-mutex=0.6
run:
- robostack-staging::ros-humble-action-msgs
- robostack-staging::ros-humble-ros-workspace
- robostack-staging::ros-humble-rosidl-default-runtime
- robostack-staging::ros-humble-std-msgs
- robostack-staging::ros-humble-geometry-msgs
- robostack-staging::ros2-distro-mutex=0.7
- robostack-staging::ros2-distro-mutex=0.6
- if: osx and x86_64
then:
- __osx >=${{ MACOSX_DEPLOYMENT_TARGET|default('10.14') }}

View File

@@ -1,6 +1,6 @@
package:
name: unilabos
version: "0.10.16"
version: "0.10.15"
source:
path: ../..

View File

@@ -85,7 +85,7 @@ Verification:
-------------
The verify_installation.py script will check:
- Python version (3.11.14)
- Python version (3.11.11)
- ROS2 rclpy installation
- UniLabOS installation and dependencies
@@ -104,7 +104,7 @@ Build Information:
Branch: {branch}
Platform: {platform}
Python: 3.11.14
Python: 3.11.11
Date: {build_date}
Troubleshooting:

View File

@@ -1,214 +0,0 @@
#!/usr/bin/env python3
"""
Development installation script for UniLabOS.
Auto-detects Chinese locale and uses appropriate mirror.
Usage:
python scripts/dev_install.py
python scripts/dev_install.py --no-mirror # Force no mirror
python scripts/dev_install.py --china # Force China mirror
python scripts/dev_install.py --skip-deps # Skip pip dependencies installation
Flow:
1. pip install -e . (install unilabos in editable mode)
2. Detect Chinese locale
3. Use uv to install pip dependencies from requirements.txt
4. Special packages (like pylabrobot) are handled by environment_check.py at runtime
"""
import locale
import subprocess
import sys
import argparse
from pathlib import Path
# Tsinghua mirror URL
TSINGHUA_MIRROR = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple"
def is_chinese_locale() -> bool:
"""
Detect if system is in Chinese locale.
Same logic as EnvironmentChecker._is_chinese_locale()
"""
try:
lang = locale.getdefaultlocale()[0]
if lang and ("zh" in lang.lower() or "chinese" in lang.lower()):
return True
except Exception:
pass
return False
def run_command(cmd: list, description: str, retry: int = 2) -> bool:
"""Run command with retry support."""
print(f"[INFO] {description}")
print(f"[CMD] {' '.join(cmd)}")
for attempt in range(retry + 1):
try:
result = subprocess.run(cmd, check=True, timeout=600)
print(f"[OK] {description}")
return True
except subprocess.CalledProcessError as e:
if attempt < retry:
print(f"[WARN] Attempt {attempt + 1} failed, retrying...")
else:
print(f"[ERROR] {description} failed: {e}")
return False
except subprocess.TimeoutExpired:
print(f"[ERROR] {description} timed out")
return False
return False
def install_editable(project_root: Path, use_mirror: bool) -> bool:
"""Install unilabos in editable mode using pip."""
cmd = [sys.executable, "-m", "pip", "install", "-e", str(project_root)]
if use_mirror:
cmd.extend(["-i", TSINGHUA_MIRROR])
return run_command(cmd, "Installing unilabos in editable mode")
def install_requirements_uv(requirements_file: Path, use_mirror: bool) -> bool:
"""Install pip dependencies using uv (installed via conda-forge::uv)."""
cmd = ["uv", "pip", "install", "-r", str(requirements_file)]
if use_mirror:
cmd.extend(["-i", TSINGHUA_MIRROR])
return run_command(cmd, "Installing pip dependencies with uv", retry=2)
def install_requirements_pip(requirements_file: Path, use_mirror: bool) -> bool:
"""Fallback: Install pip dependencies using pip."""
cmd = [sys.executable, "-m", "pip", "install", "-r", str(requirements_file)]
if use_mirror:
cmd.extend(["-i", TSINGHUA_MIRROR])
return run_command(cmd, "Installing pip dependencies with pip", retry=2)
def check_uv_available() -> bool:
"""Check if uv is available (installed via conda-forge::uv)."""
try:
subprocess.run(["uv", "--version"], capture_output=True, check=True)
return True
except (subprocess.CalledProcessError, FileNotFoundError):
return False
def main():
parser = argparse.ArgumentParser(description="Development installation script for UniLabOS")
parser.add_argument("--china", action="store_true", help="Force use China mirror (Tsinghua)")
parser.add_argument("--no-mirror", action="store_true", help="Force use default PyPI (no mirror)")
parser.add_argument(
"--skip-deps", action="store_true", help="Skip pip dependencies installation (only install unilabos)"
)
parser.add_argument("--use-pip", action="store_true", help="Use pip instead of uv for dependencies")
args = parser.parse_args()
# Determine project root
script_dir = Path(__file__).parent
project_root = script_dir.parent
requirements_file = project_root / "unilabos" / "utils" / "requirements.txt"
if not (project_root / "setup.py").exists():
print(f"[ERROR] setup.py not found in {project_root}")
sys.exit(1)
print("=" * 60)
print("UniLabOS Development Installation")
print("=" * 60)
print(f"Project root: {project_root}")
print()
# Determine mirror usage based on locale
if args.no_mirror:
use_mirror = False
print("[INFO] Mirror disabled by --no-mirror flag")
elif args.china:
use_mirror = True
print("[INFO] China mirror enabled by --china flag")
else:
use_mirror = is_chinese_locale()
if use_mirror:
print("[INFO] Chinese locale detected, using Tsinghua mirror")
else:
print("[INFO] Non-Chinese locale detected, using default PyPI")
print()
# Step 1: Install unilabos in editable mode
print("[STEP 1] Installing unilabos in editable mode...")
if not install_editable(project_root, use_mirror):
print("[ERROR] Failed to install unilabos")
print()
print("Manual fallback:")
if use_mirror:
print(f" pip install -e {project_root} -i {TSINGHUA_MIRROR}")
else:
print(f" pip install -e {project_root}")
sys.exit(1)
print()
# Step 2: Install pip dependencies
if args.skip_deps:
print("[INFO] Skipping pip dependencies installation (--skip-deps)")
else:
print("[STEP 2] Installing pip dependencies...")
if not requirements_file.exists():
print(f"[WARN] Requirements file not found: {requirements_file}")
print("[INFO] Skipping dependencies installation")
else:
# Try uv first (faster), fallback to pip
if args.use_pip:
print("[INFO] Using pip (--use-pip flag)")
success = install_requirements_pip(requirements_file, use_mirror)
elif check_uv_available():
print("[INFO] Using uv (installed via conda-forge::uv)")
success = install_requirements_uv(requirements_file, use_mirror)
if not success:
print("[WARN] uv failed, falling back to pip...")
success = install_requirements_pip(requirements_file, use_mirror)
else:
print("[WARN] uv not available (should be installed via: mamba install conda-forge::uv)")
print("[INFO] Falling back to pip...")
success = install_requirements_pip(requirements_file, use_mirror)
if not success:
print()
print("[WARN] Failed to install some dependencies automatically.")
print("You can manually install them:")
if use_mirror:
print(f" uv pip install -r {requirements_file} -i {TSINGHUA_MIRROR}")
print(" or:")
print(f" pip install -r {requirements_file} -i {TSINGHUA_MIRROR}")
else:
print(f" uv pip install -r {requirements_file}")
print(" or:")
print(f" pip install -r {requirements_file}")
print()
print("=" * 60)
print("Installation complete!")
print("=" * 60)
print()
print("Note: Some special packages (like pylabrobot) are installed")
print("automatically at runtime by unilabos if needed.")
print()
print("Verify installation:")
print(' python -c "import unilabos; print(unilabos.__version__)"')
print()
print("If you encounter issues, you can manually install dependencies:")
if use_mirror:
print(f" uv pip install -r unilabos/utils/requirements.txt -i {TSINGHUA_MIRROR}")
else:
print(" uv pip install -r unilabos/utils/requirements.txt")
print()
if __name__ == "__main__":
main()

View File

@@ -4,7 +4,7 @@ package_name = 'unilabos'
setup(
name=package_name,
version='0.10.16',
version='0.10.15',
packages=find_packages(),
include_package_data=True,
install_requires=['setuptools'],

View File

@@ -1 +1 @@
__version__ = "0.10.16"
__version__ = "0.10.15"

View File

@@ -1,6 +0,0 @@
"""Entry point for `python -m unilabos`."""
from unilabos.app.main import main
if __name__ == "__main__":
    main()

View File

@@ -7,6 +7,7 @@ import sys
import threading
import time
from typing import Dict, Any, List
import networkx as nx
import yaml
@@ -16,9 +17,9 @@ unilabos_dir = os.path.dirname(os.path.dirname(current_dir))
if unilabos_dir not in sys.path:
sys.path.append(unilabos_dir)
from unilabos.app.utils import cleanup_for_restart
from unilabos.utils.banner_print import print_status, print_unilab_banner
from unilabos.config.config import load_config, BasicConfig, HTTPConfig
from unilabos.app.utils import cleanup_for_restart
# Global restart flags (used by ws_client and web/server)
_restart_requested: bool = False
@@ -160,12 +161,6 @@ def parse_args():
default=False,
help="Complete registry information",
)
parser.add_argument(
"--check_mode",
action="store_true",
default=False,
help="Run in check mode for CI: validates registry imports and ensures no file changes",
)
parser.add_argument(
"--no_update_feedback",
action="store_true",
@@ -216,10 +211,7 @@ def main():
args_dict = vars(args)
# 环境检查 - 检查并自动安装必需的包 (可选)
skip_env_check = args_dict.get("skip_env_check", False)
check_mode = args_dict.get("check_mode", False)
if not skip_env_check:
if not args_dict.get("skip_env_check", False):
from unilabos.utils.environment_check import check_environment
if not check_environment(auto_install=True):
@@ -230,21 +222,7 @@ def main():
# 加载配置文件优先加载config然后从env读取
config_path = args_dict.get("config")
if check_mode:
args_dict["working_dir"] = os.path.abspath(os.getcwd())
# 当 skip_env_check 时,默认使用当前目录作为 working_dir
if skip_env_check and not args_dict.get("working_dir") and not config_path:
working_dir = os.path.abspath(os.getcwd())
print_status(f"跳过环境检查模式:使用当前目录作为工作目录 {working_dir}", "info")
# 检查当前目录是否有 local_config.py
local_config_in_cwd = os.path.join(working_dir, "local_config.py")
if os.path.exists(local_config_in_cwd):
config_path = local_config_in_cwd
print_status(f"发现本地配置文件: {config_path}", "info")
else:
print_status(f"未指定config路径可通过 --config 传入 local_config.py 文件路径", "info")
elif os.getcwd().endswith("unilabos_data"):
if os.getcwd().endswith("unilabos_data"):
working_dir = os.path.abspath(os.getcwd())
else:
working_dir = os.path.abspath(os.path.join(os.getcwd(), "unilabos_data"))
@@ -263,7 +241,7 @@ def main():
working_dir = os.path.dirname(config_path)
elif os.path.exists(working_dir) and os.path.exists(os.path.join(working_dir, "local_config.py")):
config_path = os.path.join(working_dir, "local_config.py")
elif not skip_env_check and not config_path and (
elif not config_path and (
not os.path.exists(working_dir) or not os.path.exists(os.path.join(working_dir, "local_config.py"))
):
print_status(f"未指定config路径可通过 --config 传入 local_config.py 文件路径", "info")
@@ -277,11 +255,9 @@ def main():
print_status(f"已创建 local_config.py 路径: {config_path}", "info")
else:
os._exit(1)
# 加载配置文件 (check_mode 跳过)
# 加载配置文件
print_status(f"当前工作目录为 {working_dir}", "info")
if not check_mode:
load_config_from_file(config_path)
load_config_from_file(config_path)
# 根据配置重新设置日志级别
from unilabos.utils.log import configure_logger, logger
@@ -337,7 +313,6 @@ def main():
machine_name = "".join([c if c.isalnum() or c == "_" else "_" for c in machine_name])
BasicConfig.machine_name = machine_name
BasicConfig.vis_2d_enable = args_dict["2d_vis"]
BasicConfig.check_mode = check_mode
from unilabos.resources.graphio import (
read_node_link_json,
@@ -356,14 +331,10 @@ def main():
# 显示启动横幅
print_unilab_banner(args_dict)
# 注册表 - check_mode 时强制启用 complete_registry
complete_registry = args_dict.get("complete_registry", False) or check_mode
lab_registry = build_registry(args_dict["registry_path"], complete_registry, BasicConfig.upload_registry)
# Check mode: complete_registry 完成后直接退出git diff 检测由 CI workflow 执行
if check_mode:
print_status("Check mode: complete_registry 完成,退出", "info")
os._exit(0)
# 注册表
lab_registry = build_registry(
args_dict["registry_path"], args_dict.get("complete_registry", False), BasicConfig.upload_registry
)
if BasicConfig.upload_registry:
# 设备注册到服务端 - 需要 ak 和 sk
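
Read together, the hunks above remove the `--check_mode` CLI path: the old code registered the flag, used the current directory as the working directory, skipped loading `local_config.py`, forced `complete_registry`, and exited as soon as the registry pass finished. A condensed sketch of that removed control flow (names follow the diff; everything else is illustrative, not the repository's exact code):

```python
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--check_mode", action="store_true", default=False,
                    help="Run in check mode for CI: validates registry imports and ensures no file changes")
parser.add_argument("--complete_registry", action="store_true", default=False)
args = parser.parse_args()

if args.check_mode:
    # Stay in the repository checkout instead of unilabos_data, and skip local_config.py.
    working_dir = os.path.abspath(os.getcwd())
else:
    working_dir = os.path.abspath(os.path.join(os.getcwd(), "unilabos_data"))

# check_mode forces a full registry completion pass ...
complete_registry = args.complete_registry or args.check_mode
# ... build_registry(registry_path, complete_registry, upload_registry) would run here ...

if args.check_mode:
    # ... and exits immediately; the CI workflow then checks `git diff` for leftover changes.
    os._exit(0)
```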

View File

@@ -4,40 +4,8 @@ UniLabOS 应用工具函数
提供清理、重启等工具函数
"""
import glob
import os
import shutil
import sys
def patch_rclpy_dll_windows():
"""在 Windows + conda 环境下为 rclpy 打 DLL 加载补丁"""
if sys.platform != "win32" or not os.environ.get("CONDA_PREFIX"):
return
try:
import rclpy
return
except ImportError as e:
if not str(e).startswith("DLL load failed"):
return
cp = os.environ["CONDA_PREFIX"]
impl = os.path.join(cp, "Lib", "site-packages", "rclpy", "impl", "implementation_singleton.py")
pyd = glob.glob(os.path.join(cp, "Lib", "site-packages", "rclpy", "_rclpy_pybind11*.pyd"))
if not os.path.exists(impl) or not pyd:
return
with open(impl, "r", encoding="utf-8") as f:
content = f.read()
lib_bin = os.path.join(cp, "Library", "bin").replace("\\", "/")
patch = f'# UniLabOS DLL Patch\nimport os,ctypes\nos.add_dll_directory("{lib_bin}") if hasattr(os,"add_dll_directory") else None\ntry: ctypes.CDLL("{pyd[0].replace(chr(92),"/")}")\nexcept: pass\n# End Patch\n'
shutil.copy2(impl, impl + ".bak")
with open(impl, "w", encoding="utf-8") as f:
f.write(patch + content)
patch_rclpy_dll_windows()
import gc
import os
import threading
import time

View File

@@ -58,14 +58,14 @@ class JobResultStore:
feedback=feedback or {},
timestamp=time.time(),
)
logger.trace(f"[JobResultStore] Stored result for job {job_id[:8]}, status={status}")
logger.debug(f"[JobResultStore] Stored result for job {job_id[:8]}, status={status}")
def get_and_remove(self, job_id: str) -> Optional[JobResult]:
"""获取并删除任务结果"""
with self._results_lock:
result = self._results.pop(job_id, None)
if result:
logger.trace(f"[JobResultStore] Retrieved and removed result for job {job_id[:8]}")
logger.debug(f"[JobResultStore] Retrieved and removed result for job {job_id[:8]}")
return result
def get_result(self, job_id: str) -> Optional[JobResult]:
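
The hunk above toggles these store/retrieve messages between `logger.trace` and `logger.debug`. Assuming the project's logger follows loguru-style level semantics (an assumption; `unilabos.utils.log` is not shown in this diff), the practical difference is only whether the message clears the sink's minimum level. A minimal illustration using loguru directly:

```python
import sys
from loguru import logger

# loguru's TRACE (5) sits below DEBUG (10); a DEBUG-level sink drops trace() calls.
logger.remove()
logger.add(sys.stderr, level="DEBUG")
logger.trace("[JobResultStore] Stored result for job 1a2b3c4d")   # filtered out
logger.debug("[JobResultStore] Stored result for job 1a2b3c4d")   # emitted

# Lowering the sink to TRACE makes both calls visible.
logger.remove()
logger.add(sys.stderr, level="TRACE")
logger.trace("[JobResultStore] Retrieved and removed result for job 1a2b3c4d")  # emitted
```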

View File

@@ -23,7 +23,7 @@ from typing import Optional, Dict, Any, List
from urllib.parse import urlparse
from enum import Enum
from typing_extensions import TypedDict
from jedi.inference.gradual.typing import TypedDict
from unilabos.app.model import JobAddReq
from unilabos.ros.nodes.presets.host_node import HostNode
@@ -154,7 +154,7 @@ class DeviceActionManager:
job_info.set_ready_timeout(10) # 设置10秒超时
self.active_jobs[device_key] = job_info
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.trace(f"[DeviceActionManager] Job {job_log} can start immediately for {device_key}")
logger.info(f"[DeviceActionManager] Job {job_log} can start immediately for {device_key}")
return True
def start_job(self, job_id: str) -> bool:
@@ -210,9 +210,8 @@ class DeviceActionManager:
job_info.update_timestamp()
# 从all_jobs中移除已结束的job
del self.all_jobs[job_id]
# job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
# logger.debug(f"[DeviceActionManager] Job {job_log} ended for {device_key}")
pass
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.info(f"[DeviceActionManager] Job {job_log} ended for {device_key}")
else:
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.warning(f"[DeviceActionManager] Job {job_log} was not active for {device_key}")
@@ -228,7 +227,7 @@ class DeviceActionManager:
next_job_log = format_job_log(
next_job.job_id, next_job.task_id, next_job.device_id, next_job.action_name
)
logger.trace(f"[DeviceActionManager] Next job {next_job_log} can start for {device_key}")
logger.info(f"[DeviceActionManager] Next job {next_job_log} can start for {device_key}")
return next_job
return None
@@ -269,7 +268,7 @@ class DeviceActionManager:
# 从all_jobs中移除
del self.all_jobs[job_id]
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.trace(f"[DeviceActionManager] Active job {job_log} cancelled for {device_key}")
logger.info(f"[DeviceActionManager] Active job {job_log} cancelled for {device_key}")
# 启动下一个任务
if device_key in self.device_queues and self.device_queues[device_key]:
@@ -282,7 +281,7 @@ class DeviceActionManager:
next_job_log = format_job_log(
next_job.job_id, next_job.task_id, next_job.device_id, next_job.action_name
)
logger.trace(f"[DeviceActionManager] Next job {next_job_log} can start after cancel")
logger.info(f"[DeviceActionManager] Next job {next_job_log} can start after cancel")
return True
# 如果是排队中的任务
@@ -296,7 +295,7 @@ class DeviceActionManager:
job_log = format_job_log(
job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name
)
logger.trace(f"[DeviceActionManager] Queued job {job_log} cancelled for {device_key}")
logger.info(f"[DeviceActionManager] Queued job {job_log} cancelled for {device_key}")
return True
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
@@ -495,12 +494,8 @@ class MessageProcessor:
await self._process_message(message_type, message_data)
else:
if message_type.endswith("_material"):
logger.trace(
f"[MessageProcessor] 收到一条归属 {data.get('edge_session')} 的旧消息{data}"
)
logger.debug(
f"[MessageProcessor] 跳过了一条归属 {data.get('edge_session')} 的旧消息: {data.get('action')}"
)
logger.trace(f"[MessageProcessor] 收到一条归属 {data.get('edge_session')} 的旧消息:{data}")
logger.debug(f"[MessageProcessor] 跳过了一条归属 {data.get('edge_session')} 的旧消息: {data.get('action')}")
else:
await self._process_message(message_type, message_data)
except json.JSONDecodeError:
@@ -570,7 +565,7 @@ class MessageProcessor:
async def _process_message(self, message_type: str, message_data: Dict[str, Any]):
"""处理收到的消息"""
logger.trace(f"[MessageProcessor] Processing message: {message_type}")
logger.debug(f"[MessageProcessor] Processing message: {message_type}")
try:
if message_type == "pong":
@@ -642,13 +637,13 @@ class MessageProcessor:
await self._send_action_state_response(
device_id, action_name, task_id, job_id, "query_action_status", True, 0
)
logger.trace(f"[MessageProcessor] Job {job_log} can start immediately")
logger.info(f"[MessageProcessor] Job {job_log} can start immediately")
else:
# 需要排队
await self._send_action_state_response(
device_id, action_name, task_id, job_id, "query_action_status", False, 10
)
logger.trace(f"[MessageProcessor] Job {job_log} queued")
logger.info(f"[MessageProcessor] Job {job_log} queued")
# 通知QueueProcessor有新的队列更新
if self.queue_processor:
@@ -852,7 +847,9 @@ class MessageProcessor:
device_action_groups[key_add] = []
device_action_groups[key_add].append(item["uuid"])
logger.info(f"[资源同步] 跨站Transfer: {item['uuid'][:8]} from {device_old_id} to {device_id}")
logger.info(
f"[资源同步] 跨站Transfer: {item['uuid'][:8]} from {device_old_id} to {device_id}"
)
else:
# 正常update
key = (device_id, "update")
@@ -866,9 +863,7 @@ class MessageProcessor:
device_action_groups[key] = []
device_action_groups[key].append(item["uuid"])
logger.trace(
f"[资源同步] 动作 {action} 分组数量: {len(device_action_groups)}, 总数量: {len(resource_uuid_list)}"
)
logger.trace(f"[资源同步] 动作 {action} 分组数量: {len(device_action_groups)}, 总数量: {len(resource_uuid_list)}")
# 为每个(device_id, action)创建独立的更新线程
for (device_id, actual_action), items in device_action_groups.items():
@@ -916,13 +911,13 @@ class MessageProcessor:
# 发送确认消息
if self.websocket_client:
await self.websocket_client.send_message(
{"action": "restart_acknowledged", "data": {"reason": reason, "delay": delay}}
)
await self.websocket_client.send_message({
"action": "restart_acknowledged",
"data": {"reason": reason, "delay": delay}
})
# 设置全局重启标志
import unilabos.app.main as main_module
main_module._restart_requested = True
main_module._restart_reason = reason
@@ -932,12 +927,10 @@ class MessageProcessor:
# 在新线程中执行清理,避免阻塞当前事件循环
def do_cleanup():
import time
time.sleep(0.5) # 给当前消息处理完成的时间
logger.info(f"[MessageProcessor] Starting cleanup for restart, reason: {reason}")
try:
from unilabos.app.utils import cleanup_for_restart
if cleanup_for_restart():
logger.info("[MessageProcessor] Cleanup successful, main() will restart")
else:
@@ -1135,7 +1128,7 @@ class QueueProcessor:
success = self.message_processor.send_message(message)
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
if success:
logger.trace(f"[QueueProcessor] Sent busy/need_more for queued job {job_log}")
logger.debug(f"[QueueProcessor] Sent busy/need_more for queued job {job_log}")
else:
logger.warning(f"[QueueProcessor] Failed to send busy status for job {job_log}")
@@ -1158,7 +1151,7 @@ class QueueProcessor:
job_info.action_name,
)
logger.trace(f"[QueueProcessor] Job {job_log} completed with status: {status}")
logger.info(f"[QueueProcessor] Job {job_log} completed with status: {status}")
# 结束任务,获取下一个可执行的任务
next_job = self.device_manager.end_job(job_id)
@@ -1178,8 +1171,8 @@ class QueueProcessor:
},
}
self.message_processor.send_message(message)
# next_job_log = format_job_log(next_job.job_id, next_job.task_id, next_job.device_id, next_job.action_name)
# logger.debug(f"[QueueProcessor] Notified next job {next_job_log} can start")
next_job_log = format_job_log(next_job.job_id, next_job.task_id, next_job.device_id, next_job.action_name)
logger.info(f"[QueueProcessor] Notified next job {next_job_log} can start")
# 立即触发下一轮状态检查
self.notify_queue_update()
@@ -1321,7 +1314,7 @@ class WebSocketClient(BaseCommunicationClient):
except (KeyError, AttributeError):
logger.warning(f"[WebSocketClient] Failed to remove job {item.job_id} from HostNode status")
# logger.debug(f"[WebSocketClient] Intercepting final status for job_id: {item.job_id} - {status}")
logger.info(f"[WebSocketClient] Intercepting final status for job_id: {item.job_id} - {status}")
# 通知队列处理器job完成包括timeout的job
self.queue_processor.handle_job_completed(item.job_id, status)
@@ -1388,9 +1381,7 @@ class WebSocketClient(BaseCommunicationClient):
if host_node:
# 获取设备信息
for device_id, namespace in host_node.devices_names.items():
device_key = (
f"{namespace}/{device_id}" if namespace.startswith("/") else f"/{namespace}/{device_id}"
)
device_key = f"{namespace}/{device_id}" if namespace.startswith("/") else f"/{namespace}/{device_id}"
is_online = device_key in host_node._online_devices
# 获取设备的动作信息
@@ -1404,16 +1395,14 @@ class WebSocketClient(BaseCommunicationClient):
"action_type": str(type(client).__name__),
}
devices.append(
{
"device_id": device_id,
"namespace": namespace,
"device_key": device_key,
"is_online": is_online,
"machine_name": host_node.device_machine_names.get(device_id, machine_name),
"actions": actions,
}
)
devices.append({
"device_id": device_id,
"namespace": namespace,
"device_key": device_key,
"is_online": is_online,
"machine_name": host_node.device_machine_names.get(device_id, machine_name),
"actions": actions,
})
logger.info(f"[WebSocketClient] Collected {len(devices)} devices for host_ready")
except Exception as e:
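
The log-level changes above sit on top of DeviceActionManager's per-device scheduling: a job either "can start immediately" or is queued, and end_job / cancel promotes the next queued job for that device_key. A minimal sketch of that scheduling idea, with names loosely following the diff rather than the actual implementation:

    # Illustrative sketch, not part of the diff: one FIFO queue per device_key;
    # a job starts immediately when the device is idle, otherwise it waits for end_job().
    from collections import deque
    from typing import Deque, Dict, Optional

    class DeviceQueueSketch:
        def __init__(self) -> None:
            self.active_jobs: Dict[str, str] = {}           # device_key -> running job_id
            self.device_queues: Dict[str, Deque[str]] = {}  # device_key -> waiting job_ids

        def submit(self, device_key: str, job_id: str) -> bool:
            if device_key not in self.active_jobs:
                self.active_jobs[device_key] = job_id
                return True                                  # can start immediately
            self.device_queues.setdefault(device_key, deque()).append(job_id)
            return False                                     # queued

        def end_job(self, device_key: str) -> Optional[str]:
            self.active_jobs.pop(device_key, None)
            queue = self.device_queues.get(device_key)
            if queue:
                next_job = queue.popleft()
                self.active_jobs[device_key] = next_job
                return next_job                              # next job can start now
            return None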

View File

@@ -22,7 +22,6 @@ class BasicConfig:
startup_json_path = None # 填写绝对路径
disable_browser = False # 禁止浏览器自动打开
port = 8002 # 本地HTTP服务
check_mode = False # CI 检查模式,用于验证 registry 导入和文件一致性
# 'TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'
log_level: Literal["TRACE", "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"] = "DEBUG"

File diff suppressed because it is too large

View File

@@ -1,687 +0,0 @@
"""
Virtual Workbench Device - 模拟工作台设备
包含:
- 1个机械臂 (每次操作3s, 独占锁)
- 3个加热台 (每次加热10s, 可并行)
工作流程:
1. A1-A5 物料同时启动,竞争机械臂
2. 机械臂将物料移动到空闲加热台
3. 加热完成后机械臂将物料移动到C1-C5
注意:调用来自线程池,使用 threading.Lock 进行同步
"""
import logging
import time
from typing import Dict, Any, Optional
from dataclasses import dataclass
from enum import Enum
from threading import Lock, RLock
from typing_extensions import TypedDict
from unilabos.ros.nodes.base_device_node import BaseROS2DeviceNode
from unilabos.utils.decorator import not_action
# ============ TypedDict 返回类型定义 ============
class MoveToHeatingStationResult(TypedDict):
"""move_to_heating_station 返回类型"""
success: bool
station_id: int
material_id: str
material_number: int
message: str
class StartHeatingResult(TypedDict):
"""start_heating 返回类型"""
success: bool
station_id: int
material_id: str
material_number: int
message: str
class MoveToOutputResult(TypedDict):
"""move_to_output 返回类型"""
success: bool
station_id: int
material_id: str
class PrepareMaterialsResult(TypedDict):
"""prepare_materials 返回类型 - 批量准备物料"""
success: bool
count: int
material_1: int # 物料编号1
material_2: int # 物料编号2
material_3: int # 物料编号3
material_4: int # 物料编号4
material_5: int # 物料编号5
message: str
# ============ 状态枚举 ============
class HeatingStationState(Enum):
"""加热台状态枚举"""
IDLE = "idle" # 空闲
OCCUPIED = "occupied" # 已放置物料,等待加热
HEATING = "heating" # 加热中
COMPLETED = "completed" # 加热完成,等待取走
class ArmState(Enum):
"""机械臂状态枚举"""
IDLE = "idle" # 空闲
BUSY = "busy" # 工作中
@dataclass
class HeatingStation:
"""加热台数据结构"""
station_id: int
state: HeatingStationState = HeatingStationState.IDLE
current_material: Optional[str] = None # 当前物料 (如 "A1", "A2")
material_number: Optional[int] = None # 物料编号 (1-5)
heating_start_time: Optional[float] = None
heating_progress: float = 0.0
class VirtualWorkbench:
"""
Virtual Workbench Device - 虚拟工作台设备
模拟一个包含1个机械臂和3个加热台的工作站
- 机械臂操作耗时3秒同一时间只能执行一个操作
- 加热台加热耗时10秒3个加热台可并行工作
工作流:
1. 物料A1-A5并发启动线程池竞争机械臂使用权
2. 获取机械臂后,查找空闲加热台
3. 机械臂将物料放入加热台,开始加热
4. 加热完成后机械臂将物料移动到目标位置Cn
"""
_ros_node: BaseROS2DeviceNode
# 配置常量
ARM_OPERATION_TIME: float = 3.0 # 机械臂操作时间(秒)
HEATING_TIME: float = 10.0 # 加热时间(秒)
NUM_HEATING_STATIONS: int = 3 # 加热台数量
def __init__(self, device_id: Optional[str] = None, config: Optional[Dict[str, Any]] = None, **kwargs):
# 处理可能的不同调用方式
if device_id is None and "id" in kwargs:
device_id = kwargs.pop("id")
if config is None and "config" in kwargs:
config = kwargs.pop("config")
self.device_id = device_id or "virtual_workbench"
self.config = config or {}
self.logger = logging.getLogger(f"VirtualWorkbench.{self.device_id}")
self.data: Dict[str, Any] = {}
# 从config中获取可配置参数
self.ARM_OPERATION_TIME = float(self.config.get("arm_operation_time", 3.0))
self.HEATING_TIME = float(self.config.get("heating_time", 10.0))
self.NUM_HEATING_STATIONS = int(self.config.get("num_heating_stations", 3))
# 机械臂状态和锁 (使用threading.Lock)
self._arm_lock = Lock()
self._arm_state = ArmState.IDLE
self._arm_current_task: Optional[str] = None
# 加热台状态 (station_id -> HeatingStation) - 立即初始化不依赖initialize()
self._heating_stations: Dict[int, HeatingStation] = {
i: HeatingStation(station_id=i)
for i in range(1, self.NUM_HEATING_STATIONS + 1)
}
self._stations_lock = RLock() # 可重入锁,保护加热台状态
# 任务追踪
self._active_tasks: Dict[str, Dict[str, Any]] = {} # material_id -> task_info
self._tasks_lock = Lock()
# 处理其他kwargs参数
skip_keys = {"arm_operation_time", "heating_time", "num_heating_stations"}
for key, value in kwargs.items():
if key not in skip_keys and not hasattr(self, key):
setattr(self, key, value)
self.logger.info(f"=== 虚拟工作台 {self.device_id} 已创建 ===")
self.logger.info(
f"机械臂操作时间: {self.ARM_OPERATION_TIME}s | "
f"加热时间: {self.HEATING_TIME}s | "
f"加热台数量: {self.NUM_HEATING_STATIONS}"
)
@not_action
def post_init(self, ros_node: BaseROS2DeviceNode):
"""ROS节点初始化后回调"""
self._ros_node = ros_node
@not_action
def initialize(self) -> bool:
"""初始化虚拟工作台"""
self.logger.info(f"初始化虚拟工作台 {self.device_id}")
# 重置加热台状态 (已在__init__中创建这里重置为初始状态)
with self._stations_lock:
for station in self._heating_stations.values():
station.state = HeatingStationState.IDLE
station.current_material = None
station.material_number = None
station.heating_progress = 0.0
# 初始化状态
self.data.update({
"status": "Ready",
"arm_state": ArmState.IDLE.value,
"arm_current_task": None,
"heating_stations": self._get_stations_status(),
"active_tasks_count": 0,
"message": "工作台就绪",
})
self.logger.info(f"工作台初始化完成: {self.NUM_HEATING_STATIONS}个加热台就绪")
return True
@not_action
def cleanup(self) -> bool:
"""清理虚拟工作台"""
self.logger.info(f"清理虚拟工作台 {self.device_id}")
self._arm_state = ArmState.IDLE
self._arm_current_task = None
with self._stations_lock:
self._heating_stations.clear()
with self._tasks_lock:
self._active_tasks.clear()
self.data.update({
"status": "Offline",
"arm_state": ArmState.IDLE.value,
"heating_stations": {},
"message": "工作台已关闭",
})
return True
def _get_stations_status(self) -> Dict[int, Dict[str, Any]]:
"""获取所有加热台状态"""
with self._stations_lock:
return {
station_id: {
"state": station.state.value,
"current_material": station.current_material,
"material_number": station.material_number,
"heating_progress": station.heating_progress,
}
for station_id, station in self._heating_stations.items()
}
def _update_data_status(self, message: Optional[str] = None):
"""更新状态数据"""
self.data.update({
"arm_state": self._arm_state.value,
"arm_current_task": self._arm_current_task,
"heating_stations": self._get_stations_status(),
"active_tasks_count": len(self._active_tasks),
})
if message:
self.data["message"] = message
def _find_available_heating_station(self) -> Optional[int]:
"""查找空闲的加热台
Returns:
空闲加热台ID如果没有则返回None
"""
with self._stations_lock:
for station_id, station in self._heating_stations.items():
if station.state == HeatingStationState.IDLE:
return station_id
return None
def _acquire_arm(self, task_description: str) -> bool:
"""获取机械臂使用权(阻塞直到获取)
Args:
task_description: 任务描述,用于日志
Returns:
是否成功获取
"""
self.logger.info(f"[{task_description}] 等待获取机械臂...")
# 阻塞等待获取锁
self._arm_lock.acquire()
self._arm_state = ArmState.BUSY
self._arm_current_task = task_description
self._update_data_status(f"机械臂执行: {task_description}")
self.logger.info(f"[{task_description}] 成功获取机械臂使用权")
return True
def _release_arm(self):
"""释放机械臂"""
task = self._arm_current_task
self._arm_state = ArmState.IDLE
self._arm_current_task = None
self._arm_lock.release()
self._update_data_status(f"机械臂已释放 (完成: {task})")
self.logger.info(f"机械臂已释放 (完成: {task})")
def prepare_materials(
self,
count: int = 5,
) -> PrepareMaterialsResult:
"""
批量准备物料 - 虚拟起始节点
作为工作流的起始节点,生成指定数量的物料编号供后续节点使用。
输出5个handle (material_1 ~ material_5)分别对应实验1~5。
Args:
count: 待生成的物料数量默认5 (生成 A1-A5)
Returns:
PrepareMaterialsResult: 包含 material_1 ~ material_5 用于传递给 move_to_heating_station
"""
# 生成物料列表 A1 - A{count}
materials = [i for i in range(1, count + 1)]
self.logger.info(
f"[准备物料] 生成 {count} 个物料: "
f"A1-A{count} -> material_1~material_{count}"
)
return {
"success": True,
"count": count,
"material_1": materials[0] if len(materials) > 0 else 0,
"material_2": materials[1] if len(materials) > 1 else 0,
"material_3": materials[2] if len(materials) > 2 else 0,
"material_4": materials[3] if len(materials) > 3 else 0,
"material_5": materials[4] if len(materials) > 4 else 0,
"message": f"已准备 {count} 个物料: A1-A{count}",
}
def move_to_heating_station(
self,
material_number: int,
) -> MoveToHeatingStationResult:
"""
将物料从An位置移动到加热台
多线程并发调用时,会竞争机械臂使用权,并自动查找空闲加热台
Args:
material_number: 物料编号 (1-5)
Returns:
MoveToHeatingStationResult: 包含 station_id, material_number 等用于传递给下一个节点
"""
# 根据物料编号生成物料ID
material_id = f"A{material_number}"
task_desc = f"移动{material_id}到加热台"
self.logger.info(f"[任务] {task_desc} - 开始执行")
# 记录任务
with self._tasks_lock:
self._active_tasks[material_id] = {
"status": "waiting_for_arm",
"start_time": time.time(),
}
try:
# 步骤1: 等待获取机械臂使用权(竞争)
with self._tasks_lock:
self._active_tasks[material_id]["status"] = "waiting_for_arm"
self._acquire_arm(task_desc)
# 步骤2: 查找空闲加热台
with self._tasks_lock:
self._active_tasks[material_id]["status"] = "finding_station"
station_id = None
# 循环等待直到找到空闲加热台
while station_id is None:
station_id = self._find_available_heating_station()
if station_id is None:
self.logger.info(f"[{material_id}] 没有空闲加热台,等待中...")
# 释放机械臂,等待后重试
self._release_arm()
time.sleep(0.5)
self._acquire_arm(task_desc)
# 步骤3: 占用加热台 - 立即标记为OCCUPIED防止其他任务选择同一加热台
with self._stations_lock:
self._heating_stations[station_id].state = HeatingStationState.OCCUPIED
self._heating_stations[station_id].current_material = material_id
self._heating_stations[station_id].material_number = material_number
# 步骤4: 模拟机械臂移动操作 (3秒)
with self._tasks_lock:
self._active_tasks[material_id]["status"] = "arm_moving"
self._active_tasks[material_id]["assigned_station"] = station_id
self.logger.info(f"[{material_id}] 机械臂正在移动到加热台{station_id}...")
time.sleep(self.ARM_OPERATION_TIME)
# 步骤5: 放入加热台完成
self._update_data_status(f"{material_id}已放入加热台{station_id}")
self.logger.info(f"[{material_id}] 已放入加热台{station_id} (用时{self.ARM_OPERATION_TIME}s)")
# 释放机械臂
self._release_arm()
with self._tasks_lock:
self._active_tasks[material_id]["status"] = "placed_on_station"
return {
"success": True,
"station_id": station_id,
"material_id": material_id,
"material_number": material_number,
"message": f"{material_id}已成功移动到加热台{station_id}",
}
except Exception as e:
self.logger.error(f"[{material_id}] 移动失败: {str(e)}")
if self._arm_lock.locked():
self._release_arm()
return {
"success": False,
"station_id": -1,
"material_id": material_id,
"material_number": material_number,
"message": f"移动失败: {str(e)}",
}
def start_heating(
self,
station_id: int,
material_number: int,
) -> StartHeatingResult:
"""
启动指定加热台的加热程序
Args:
station_id: 加热台ID (1-3),从 move_to_heating_station 的 handle 传入
material_number: 物料编号,从 move_to_heating_station 的 handle 传入
Returns:
StartHeatingResult: 包含 station_id, material_number 等用于传递给下一个节点
"""
self.logger.info(f"[加热台{station_id}] 开始加热")
if station_id not in self._heating_stations:
return {
"success": False,
"station_id": station_id,
"material_id": "",
"material_number": material_number,
"message": f"无效的加热台ID: {station_id}",
}
with self._stations_lock:
station = self._heating_stations[station_id]
if station.current_material is None:
return {
"success": False,
"station_id": station_id,
"material_id": "",
"material_number": material_number,
"message": f"加热台{station_id}上没有物料",
}
if station.state == HeatingStationState.HEATING:
return {
"success": False,
"station_id": station_id,
"material_id": station.current_material,
"material_number": material_number,
"message": f"加热台{station_id}已经在加热中",
}
material_id = station.current_material
# 开始加热
station.state = HeatingStationState.HEATING
station.heating_start_time = time.time()
station.heating_progress = 0.0
with self._tasks_lock:
if material_id in self._active_tasks:
self._active_tasks[material_id]["status"] = "heating"
self._update_data_status(f"加热台{station_id}开始加热{material_id}")
# 模拟加热过程 (10秒)
start_time = time.time()
while True:
elapsed = time.time() - start_time
progress = min(100.0, (elapsed / self.HEATING_TIME) * 100)
with self._stations_lock:
self._heating_stations[station_id].heating_progress = progress
self._update_data_status(f"加热台{station_id}加热中: {progress:.1f}%")
if elapsed >= self.HEATING_TIME:
break
time.sleep(1.0)
# 加热完成
with self._stations_lock:
self._heating_stations[station_id].state = HeatingStationState.COMPLETED
self._heating_stations[station_id].heating_progress = 100.0
with self._tasks_lock:
if material_id in self._active_tasks:
self._active_tasks[material_id]["status"] = "heating_completed"
self._update_data_status(f"加热台{station_id}加热完成")
self.logger.info(f"[加热台{station_id}] {material_id}加热完成 (用时{self.HEATING_TIME}s)")
return {
"success": True,
"station_id": station_id,
"material_id": material_id,
"material_number": material_number,
"message": f"加热台{station_id}加热完成",
}
def move_to_output(
self,
station_id: int,
material_number: int,
) -> MoveToOutputResult:
"""
将物料从加热台移动到输出位置Cn
Args:
station_id: 加热台ID (1-3),从 start_heating 的 handle 传入
material_number: 物料编号,从 start_heating 的 handle 传入,用于确定输出位置 Cn
Returns:
MoveToOutputResult: 包含执行结果
"""
output_number = material_number # 物料编号决定输出位置
if station_id not in self._heating_stations:
return {
"success": False,
"station_id": station_id,
"material_id": "",
"output_position": f"C{output_number}",
"message": f"无效的加热台ID: {station_id}",
}
with self._stations_lock:
station = self._heating_stations[station_id]
material_id = station.current_material
if material_id is None:
return {
"success": False,
"station_id": station_id,
"material_id": "",
"output_position": f"C{output_number}",
"message": f"加热台{station_id}上没有物料",
}
if station.state != HeatingStationState.COMPLETED:
return {
"success": False,
"station_id": station_id,
"material_id": material_id,
"output_position": f"C{output_number}",
"message": f"加热台{station_id}尚未完成加热 (当前状态: {station.state.value})",
}
output_position = f"C{output_number}"
task_desc = f"从加热台{station_id}移动{material_id}{output_position}"
self.logger.info(f"[任务] {task_desc}")
try:
with self._tasks_lock:
if material_id in self._active_tasks:
self._active_tasks[material_id]["status"] = "waiting_for_arm_output"
# 获取机械臂
self._acquire_arm(task_desc)
with self._tasks_lock:
if material_id in self._active_tasks:
self._active_tasks[material_id]["status"] = "arm_moving_to_output"
# 模拟机械臂操作 (3秒)
self.logger.info(f"[{material_id}] 机械臂正在从加热台{station_id}取出并移动到{output_position}...")
time.sleep(self.ARM_OPERATION_TIME)
# 清空加热台
with self._stations_lock:
self._heating_stations[station_id].state = HeatingStationState.IDLE
self._heating_stations[station_id].current_material = None
self._heating_stations[station_id].material_number = None
self._heating_stations[station_id].heating_progress = 0.0
self._heating_stations[station_id].heating_start_time = None
# 释放机械臂
self._release_arm()
# 任务完成
with self._tasks_lock:
if material_id in self._active_tasks:
self._active_tasks[material_id]["status"] = "completed"
self._active_tasks[material_id]["end_time"] = time.time()
self._update_data_status(f"{material_id}已移动到{output_position}")
self.logger.info(f"[{material_id}] 已成功移动到{output_position} (用时{self.ARM_OPERATION_TIME}s)")
return {
"success": True,
"station_id": station_id,
"material_id": material_id,
"output_position": output_position,
"message": f"{material_id}已成功移动到{output_position}",
}
except Exception as e:
self.logger.error(f"移动到输出位置失败: {str(e)}")
if self._arm_lock.locked():
self._release_arm()
return {
"success": False,
"station_id": station_id,
"material_id": "",
"output_position": output_position,
"message": f"移动失败: {str(e)}",
}
# ============ 状态属性 ============
@property
def status(self) -> str:
return self.data.get("status", "Unknown")
@property
def arm_state(self) -> str:
return self._arm_state.value
@property
def arm_current_task(self) -> str:
return self._arm_current_task or ""
@property
def heating_station_1_state(self) -> str:
with self._stations_lock:
station = self._heating_stations.get(1)
return station.state.value if station else "unknown"
@property
def heating_station_1_material(self) -> str:
with self._stations_lock:
station = self._heating_stations.get(1)
return station.current_material or "" if station else ""
@property
def heating_station_1_progress(self) -> float:
with self._stations_lock:
station = self._heating_stations.get(1)
return station.heating_progress if station else 0.0
@property
def heating_station_2_state(self) -> str:
with self._stations_lock:
station = self._heating_stations.get(2)
return station.state.value if station else "unknown"
@property
def heating_station_2_material(self) -> str:
with self._stations_lock:
station = self._heating_stations.get(2)
return station.current_material or "" if station else ""
@property
def heating_station_2_progress(self) -> float:
with self._stations_lock:
station = self._heating_stations.get(2)
return station.heating_progress if station else 0.0
@property
def heating_station_3_state(self) -> str:
with self._stations_lock:
station = self._heating_stations.get(3)
return station.state.value if station else "unknown"
@property
def heating_station_3_material(self) -> str:
with self._stations_lock:
station = self._heating_stations.get(3)
return station.current_material or "" if station else ""
@property
def heating_station_3_progress(self) -> float:
with self._stations_lock:
station = self._heating_stations.get(3)
return station.heating_progress if station else 0.0
@property
def active_tasks_count(self) -> int:
with self._tasks_lock:
return len(self._active_tasks)
@property
def message(self) -> str:
return self.data.get("message", "")
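
The deleted file above models a workbench where materials A1-A5 race for one arm, heat on one of three stations, then move to C1-C5. A minimal sketch of driving that three-step pipeline from a thread pool, using the action names of the deleted class (the driver itself is assumed, not part of the original code):

    # Illustrative sketch, not part of the diff: concurrent use of the deleted
    # VirtualWorkbench actions, as described in its docstring.
    from concurrent.futures import ThreadPoolExecutor

    def run_material(bench, material_number: int) -> str:
        placed = bench.move_to_heating_station(material_number)        # competes for the arm
        heated = bench.start_heating(placed["station_id"], material_number)
        done = bench.move_to_output(heated["station_id"], material_number)
        return done["message"]

    def run_all(bench) -> None:
        prepared = bench.prepare_materials(count=5)
        numbers = [prepared[f"material_{i}"] for i in range(1, prepared["count"] + 1)]
        with ThreadPoolExecutor(max_workers=5) as pool:
            for message in pool.map(lambda n: run_material(bench, n), numbers):
                print(message)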

View File

@@ -0,0 +1,589 @@
workstation.bioyond_dispensing_station:
category:
- workstation
- bioyond
class:
action_value_mappings:
auto-batch_create_90_10_vial_feeding_tasks:
feedback: {}
goal: {}
goal_default:
delay_time: null
hold_m_name: null
liquid_material_name: NMP
speed: null
temperature: null
titration: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
delay_time:
type: string
hold_m_name:
type: string
liquid_material_name:
default: NMP
type: string
speed:
type: string
temperature:
type: string
titration:
type: string
required:
- titration
type: object
result: {}
required:
- goal
title: batch_create_90_10_vial_feeding_tasks参数
type: object
type: UniLabJsonCommand
auto-batch_create_diamine_solution_tasks:
feedback: {}
goal: {}
goal_default:
delay_time: null
liquid_material_name: NMP
solutions: null
speed: null
temperature: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
delay_time:
type: string
liquid_material_name:
default: NMP
type: string
solutions:
type: string
speed:
type: string
temperature:
type: string
required:
- solutions
type: object
result: {}
required:
- goal
title: batch_create_diamine_solution_tasks参数
type: object
type: UniLabJsonCommand
auto-brief_step_parameters:
feedback: {}
goal: {}
goal_default:
data: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
data:
type: object
required:
- data
type: object
result: {}
required:
- goal
title: brief_step_parameters参数
type: object
type: UniLabJsonCommand
auto-compute_experiment_design:
feedback: {}
goal: {}
goal_default:
m_tot: '70'
ratio: null
titration_percent: '0.03'
wt_percent: '0.25'
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
m_tot:
default: '70'
type: string
ratio:
type: object
titration_percent:
default: '0.03'
type: string
wt_percent:
default: '0.25'
type: string
required:
- ratio
type: object
result:
properties:
feeding_order:
items: {}
title: Feeding Order
type: array
return_info:
title: Return Info
type: string
solutions:
items: {}
title: Solutions
type: array
solvents:
additionalProperties: true
title: Solvents
type: object
titration:
additionalProperties: true
title: Titration
type: object
required:
- solutions
- titration
- solvents
- feeding_order
- return_info
title: ComputeExperimentDesignReturn
type: object
required:
- goal
title: compute_experiment_design参数
type: object
type: UniLabJsonCommand
auto-process_order_finish_report:
feedback: {}
goal: {}
goal_default:
report_request: null
used_materials: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
report_request:
type: string
used_materials:
type: string
required:
- report_request
- used_materials
type: object
result: {}
required:
- goal
title: process_order_finish_report参数
type: object
type: UniLabJsonCommand
auto-project_order_report:
feedback: {}
goal: {}
goal_default:
order_id: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
order_id:
type: string
required:
- order_id
type: object
result: {}
required:
- goal
title: project_order_report参数
type: object
type: UniLabJsonCommand
auto-query_resource_by_name:
feedback: {}
goal: {}
goal_default:
material_name: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
material_name:
type: string
required:
- material_name
type: object
result: {}
required:
- goal
title: query_resource_by_name参数
type: object
type: UniLabJsonCommand
auto-transfer_materials_to_reaction_station:
feedback: {}
goal: {}
goal_default:
target_device_id: null
transfer_groups: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
target_device_id:
type: string
transfer_groups:
type: array
required:
- target_device_id
- transfer_groups
type: object
result: {}
required:
- goal
title: transfer_materials_to_reaction_station参数
type: object
type: UniLabJsonCommand
auto-wait_for_multiple_orders_and_get_reports:
feedback: {}
goal: {}
goal_default:
batch_create_result: null
check_interval: 10
timeout: 7200
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
batch_create_result:
type: string
check_interval:
default: 10
type: integer
timeout:
default: 7200
type: integer
required: []
type: object
result: {}
required:
- goal
title: wait_for_multiple_orders_and_get_reports参数
type: object
type: UniLabJsonCommand
auto-workflow_sample_locations:
feedback: {}
goal: {}
goal_default:
workflow_id: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
workflow_id:
type: string
required:
- workflow_id
type: object
result: {}
required:
- goal
title: workflow_sample_locations参数
type: object
type: UniLabJsonCommand
create_90_10_vial_feeding_task:
feedback: {}
goal:
delay_time: delay_time
hold_m_name: hold_m_name
order_name: order_name
percent_10_1_assign_material_name: percent_10_1_assign_material_name
percent_10_1_liquid_material_name: percent_10_1_liquid_material_name
percent_10_1_target_weigh: percent_10_1_target_weigh
percent_10_1_volume: percent_10_1_volume
percent_10_2_assign_material_name: percent_10_2_assign_material_name
percent_10_2_liquid_material_name: percent_10_2_liquid_material_name
percent_10_2_target_weigh: percent_10_2_target_weigh
percent_10_2_volume: percent_10_2_volume
percent_10_3_assign_material_name: percent_10_3_assign_material_name
percent_10_3_liquid_material_name: percent_10_3_liquid_material_name
percent_10_3_target_weigh: percent_10_3_target_weigh
percent_10_3_volume: percent_10_3_volume
percent_90_1_assign_material_name: percent_90_1_assign_material_name
percent_90_1_target_weigh: percent_90_1_target_weigh
percent_90_2_assign_material_name: percent_90_2_assign_material_name
percent_90_2_target_weigh: percent_90_2_target_weigh
percent_90_3_assign_material_name: percent_90_3_assign_material_name
percent_90_3_target_weigh: percent_90_3_target_weigh
speed: speed
temperature: temperature
goal_default:
delay_time: ''
hold_m_name: ''
order_name: ''
percent_10_1_assign_material_name: ''
percent_10_1_liquid_material_name: ''
percent_10_1_target_weigh: ''
percent_10_1_volume: ''
percent_10_2_assign_material_name: ''
percent_10_2_liquid_material_name: ''
percent_10_2_target_weigh: ''
percent_10_2_volume: ''
percent_10_3_assign_material_name: ''
percent_10_3_liquid_material_name: ''
percent_10_3_target_weigh: ''
percent_10_3_volume: ''
percent_90_1_assign_material_name: ''
percent_90_1_target_weigh: ''
percent_90_2_assign_material_name: ''
percent_90_2_target_weigh: ''
percent_90_3_assign_material_name: ''
percent_90_3_target_weigh: ''
speed: ''
temperature: ''
handles: {}
result:
return_info: return_info
schema:
description: ''
properties:
feedback:
properties: {}
required: []
title: DispenStationVialFeed_Feedback
type: object
goal:
properties:
delay_time:
type: string
hold_m_name:
type: string
order_name:
type: string
percent_10_1_assign_material_name:
type: string
percent_10_1_liquid_material_name:
type: string
percent_10_1_target_weigh:
type: string
percent_10_1_volume:
type: string
percent_10_2_assign_material_name:
type: string
percent_10_2_liquid_material_name:
type: string
percent_10_2_target_weigh:
type: string
percent_10_2_volume:
type: string
percent_10_3_assign_material_name:
type: string
percent_10_3_liquid_material_name:
type: string
percent_10_3_target_weigh:
type: string
percent_10_3_volume:
type: string
percent_90_1_assign_material_name:
type: string
percent_90_1_target_weigh:
type: string
percent_90_2_assign_material_name:
type: string
percent_90_2_target_weigh:
type: string
percent_90_3_assign_material_name:
type: string
percent_90_3_target_weigh:
type: string
speed:
type: string
temperature:
type: string
required:
- order_name
- percent_90_1_assign_material_name
- percent_90_1_target_weigh
- percent_90_2_assign_material_name
- percent_90_2_target_weigh
- percent_90_3_assign_material_name
- percent_90_3_target_weigh
- percent_10_1_assign_material_name
- percent_10_1_target_weigh
- percent_10_1_volume
- percent_10_1_liquid_material_name
- percent_10_2_assign_material_name
- percent_10_2_target_weigh
- percent_10_2_volume
- percent_10_2_liquid_material_name
- percent_10_3_assign_material_name
- percent_10_3_target_weigh
- percent_10_3_volume
- percent_10_3_liquid_material_name
- speed
- temperature
- delay_time
- hold_m_name
title: DispenStationVialFeed_Goal
type: object
result:
properties:
return_info:
type: string
required:
- return_info
title: DispenStationVialFeed_Result
type: object
required:
- goal
title: DispenStationVialFeed
type: object
type: DispenStationVialFeed
create_diamine_solution_task:
feedback: {}
goal:
delay_time: delay_time
hold_m_name: hold_m_name
liquid_material_name: liquid_material_name
material_name: material_name
order_name: order_name
speed: speed
target_weigh: target_weigh
temperature: temperature
volume: volume
goal_default:
delay_time: ''
hold_m_name: ''
liquid_material_name: ''
material_name: ''
order_name: ''
speed: ''
target_weigh: ''
temperature: ''
volume: ''
handles: {}
result:
return_info: return_info
schema:
description: ''
properties:
feedback:
properties: {}
required: []
title: DispenStationSolnPrep_Feedback
type: object
goal:
properties:
delay_time:
type: string
hold_m_name:
type: string
liquid_material_name:
type: string
material_name:
type: string
order_name:
type: string
speed:
type: string
target_weigh:
type: string
temperature:
type: string
volume:
type: string
required:
- order_name
- material_name
- target_weigh
- volume
- liquid_material_name
- speed
- temperature
- delay_time
- hold_m_name
title: DispenStationSolnPrep_Goal
type: object
result:
properties:
return_info:
type: string
required:
- return_info
title: DispenStationSolnPrep_Result
type: object
required:
- goal
title: DispenStationSolnPrep
type: object
type: DispenStationSolnPrep
module: unilabos.devices.workstation.bioyond_studio.dispensing_station:BioyondDispensingStation
status_types: {}
type: python
config_info: []
description: ''
handles: []
icon: ''
init_param_schema:
config:
properties:
config:
type: string
deck:
type: string
required:
- config
- deck
type: object
data:
properties: {}
required: []
type: object
version: 1.0.0
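
For reference, the auto-compute_experiment_design mapping above requires a ratio object in the goal and returns solutions, titration, solvents, feeding_order and return_info. A hedged example of a goal payload that fits that schema; the ratio keys and values are made up for illustration, and the string defaults mirror the schema:

    # Illustrative sketch, not part of the registry: a goal matching the
    # auto-compute_experiment_design schema. The ratio contents are hypothetical.
    goal = {
        "ratio": {"monomer_A": 0.5, "monomer_B": 0.5},  # hypothetical composition
        "m_tot": "70",
        "wt_percent": "0.25",
        "titration_percent": "0.03",
    }
    # Expected result keys per the schema: solutions (array), titration (object),
    # solvents (object), feeding_order (array), return_info (string).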

File diff suppressed because it is too large

View File

@@ -5,135 +5,6 @@ bioyond_dispensing_station:
- bioyond_dispensing_station
class:
action_value_mappings:
auto-brief_step_parameters:
feedback: {}
goal: {}
goal_default:
data: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
data:
type: object
required:
- data
type: object
result: {}
required:
- goal
title: brief_step_parameters参数
type: object
type: UniLabJsonCommand
auto-process_order_finish_report:
feedback: {}
goal: {}
goal_default:
report_request: null
used_materials: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
report_request:
type: string
used_materials:
type: string
required:
- report_request
- used_materials
type: object
result: {}
required:
- goal
title: process_order_finish_report参数
type: object
type: UniLabJsonCommand
auto-project_order_report:
feedback: {}
goal: {}
goal_default:
order_id: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
order_id:
type: string
required:
- order_id
type: object
result: {}
required:
- goal
title: project_order_report参数
type: object
type: UniLabJsonCommand
auto-query_resource_by_name:
feedback: {}
goal: {}
goal_default:
material_name: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
material_name:
type: string
required:
- material_name
type: object
result: {}
required:
- goal
title: query_resource_by_name参数
type: object
type: UniLabJsonCommand
auto-workflow_sample_locations:
feedback: {}
goal: {}
goal_default:
workflow_id: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
workflow_id:
type: string
required:
- workflow_id
type: object
result: {}
required:
- goal
title: workflow_sample_locations参数
type: object
type: UniLabJsonCommand
batch_create_90_10_vial_feeding_tasks:
feedback: {}
goal:

View File

@@ -405,7 +405,7 @@ coincellassemblyworkstation_device:
goal:
properties:
bottle_num:
type: string
type: integer
required:
- bottle_num
type: object

View File

@@ -49,7 +49,32 @@ opcua_example:
title: load_config参数
type: object
type: UniLabJsonCommand
auto-refresh_node_values:
auto-post_init:
feedback: {}
goal: {}
goal_default:
ros_node: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
ros_node:
type: string
required:
- ros_node
type: object
result: {}
required:
- goal
title: post_init参数
type: object
type: UniLabJsonCommand
auto-print_cache_stats:
feedback: {}
goal: {}
goal_default: {}
@@ -67,7 +92,32 @@ opcua_example:
result: {}
required:
- goal
title: refresh_node_values参数
title: print_cache_stats参数
type: object
type: UniLabJsonCommand
auto-read_node:
feedback: {}
goal: {}
goal_default:
node_name: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
node_name:
type: string
required:
- node_name
type: object
result: {}
required:
- goal
title: read_node参数
type: object
type: UniLabJsonCommand
auto-set_node_value:
@@ -99,50 +149,9 @@ opcua_example:
title: set_node_value参数
type: object
type: UniLabJsonCommand
auto-start_node_refresh:
feedback: {}
goal: {}
goal_default: {}
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties: {}
required: []
type: object
result: {}
required:
- goal
title: start_node_refresh参数
type: object
type: UniLabJsonCommand
auto-stop_node_refresh:
feedback: {}
goal: {}
goal_default: {}
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties: {}
required: []
type: object
result: {}
required:
- goal
title: stop_node_refresh参数
type: object
type: UniLabJsonCommand
module: unilabos.device_comms.opcua_client.client:OpcUaClient
status_types:
cache_stats: dict
node_value: String
type: python
config_info: []
@@ -152,15 +161,23 @@ opcua_example:
init_param_schema:
config:
properties:
cache_timeout:
default: 5.0
type: number
config_path:
type: string
deck:
type: string
password:
type: string
refresh_interval:
default: 1.0
type: number
subscription_interval:
default: 500
type: integer
url:
type: string
use_subscription:
default: true
type: boolean
username:
type: string
required:
@@ -168,9 +185,12 @@ opcua_example:
type: object
data:
properties:
cache_stats:
type: object
node_value:
type: string
required:
- node_value
- cache_stats
type: object
version: 1.0.0
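
The init_param_schema above now exposes the OPC UA connection and subscription settings. A hedged example of a config matching those properties; the endpoint, credentials and file paths are placeholders, only the keys and defaults come from the schema, and the required list is truncated in this hunk:

    # Illustrative sketch, not part of the registry: an init config for opcua_example.
    opcua_config = {
        "url": "opc.tcp://192.168.1.10:4840",  # placeholder endpoint
        "username": "operator",                # placeholder credentials
        "password": "secret",
        "config_path": "opcua_nodes.yaml",     # placeholder node-map file
        "use_subscription": True,              # schema default: true
        "subscription_interval": 500,          # schema default: 500
        "refresh_interval": 1.0,               # schema default: 1.0
        "cache_timeout": 5.0,                  # schema default: 5.0
    }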

View File

@@ -58,313 +58,6 @@ reaction_station.bioyond:
title: add_time_constraint参数
type: object
type: UniLabJsonCommand
auto-clear_workflows:
feedback: {}
goal: {}
goal_default: {}
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties: {}
required: []
type: object
result: {}
required:
- goal
title: clear_workflows参数
type: object
type: UniLabJsonCommand
auto-create_order:
feedback: {}
goal: {}
goal_default:
json_str: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
json_str:
type: string
required:
- json_str
type: object
result: {}
required:
- goal
title: create_order参数
type: object
type: UniLabJsonCommand
auto-hard_delete_merged_workflows:
feedback: {}
goal: {}
goal_default:
workflow_ids: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
workflow_ids:
items:
type: string
type: array
required:
- workflow_ids
type: object
result: {}
required:
- goal
title: hard_delete_merged_workflows参数
type: object
type: UniLabJsonCommand
auto-merge_workflow_with_parameters:
feedback: {}
goal: {}
goal_default:
json_str: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
json_str:
type: string
required:
- json_str
type: object
result: {}
required:
- goal
title: merge_workflow_with_parameters参数
type: object
type: UniLabJsonCommand
auto-process_temperature_cutoff_report:
feedback: {}
goal: {}
goal_default:
report_request: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
report_request:
type: string
required:
- report_request
type: object
result: {}
required:
- goal
title: process_temperature_cutoff_report参数
type: object
type: UniLabJsonCommand
auto-process_web_workflows:
feedback: {}
goal: {}
goal_default:
web_workflow_json: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
web_workflow_json:
type: string
required:
- web_workflow_json
type: object
result: {}
required:
- goal
title: process_web_workflows参数
type: object
type: UniLabJsonCommand
auto-set_reactor_temperature:
feedback: {}
goal: {}
goal_default:
reactor_id: null
temperature: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
reactor_id:
type: integer
temperature:
type: number
required:
- reactor_id
- temperature
type: object
result: {}
required:
- goal
title: set_reactor_temperature参数
type: object
type: UniLabJsonCommand
auto-skip_titration_steps:
feedback: {}
goal: {}
goal_default:
preintake_id: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
preintake_id:
type: string
required:
- preintake_id
type: object
result: {}
required:
- goal
title: skip_titration_steps参数
type: object
type: UniLabJsonCommand
auto-sync_workflow_sequence_from_bioyond:
feedback: {}
goal: {}
goal_default: {}
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties: {}
required: []
type: object
result: {}
required:
- goal
title: sync_workflow_sequence_from_bioyond参数
type: object
type: UniLabJsonCommand
auto-wait_for_multiple_orders_and_get_reports:
feedback: {}
goal: {}
goal_default:
batch_create_result: null
check_interval: 10
timeout: 7200
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
batch_create_result:
type: string
check_interval:
default: 10
type: integer
timeout:
default: 7200
type: integer
required: []
type: object
result: {}
required:
- goal
title: wait_for_multiple_orders_and_get_reports参数
type: object
type: UniLabJsonCommand
auto-workflow_sequence:
feedback: {}
goal: {}
goal_default:
value: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
value:
items:
type: string
type: array
required:
- value
type: object
result: {}
required:
- goal
title: workflow_sequence参数
type: object
type: UniLabJsonCommand
auto-workflow_step_query:
feedback: {}
goal: {}
goal_default:
workflow_id: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
workflow_id:
type: string
required:
- workflow_id
type: object
result: {}
required:
- goal
title: workflow_step_query参数
type: object
type: UniLabJsonCommand
clean_all_server_workflows:
feedback: {}
goal: {}
@@ -981,7 +674,17 @@ reaction_station.bioyond:
module: unilabos.devices.workstation.bioyond_studio.reaction_station.reaction_station:BioyondReactionStation
protocol_type: []
status_types:
workflow_sequence: str
average_viscosity: Float64
force: Float64
in_temperature: Float64
out_temperature: Float64
pt100_temperature: Float64
sensor_average_temperature: Float64
setting_temperature: Float64
speed: Float64
target_temperature: Float64
viscosity: Float64
workflow_sequence: String
type: python
config_info: []
description: Bioyond反应站
@@ -1001,7 +704,9 @@ reaction_station.bioyond:
data:
properties:
workflow_sequence:
type: string
items:
type: string
type: array
required:
- workflow_sequence
type: object
@@ -1011,34 +716,19 @@ reaction_station.reactor:
- reactor
- reaction_station_bioyond
class:
action_value_mappings:
auto-update_metrics:
feedback: {}
goal: {}
goal_default:
payload: null
handles: {}
placeholder_keys: {}
result: {}
schema:
description: ''
properties:
feedback: {}
goal:
properties:
payload:
type: object
required:
- payload
type: object
result: {}
required:
- goal
title: update_metrics参数
type: object
type: UniLabJsonCommand
action_value_mappings: {}
module: unilabos.devices.workstation.bioyond_studio.reaction_station.reaction_station:BioyondReactor
status_types: {}
status_types:
average_viscosity: Float64
force: Float64
in_temperature: Float64
out_temperature: Float64
pt100_temperature: Float64
sensor_average_temperature: Float64
setting_temperature: Float64
speed: Float64
target_temperature: Float64
viscosity: Float64
type: python
config_info: []
description: 反应站子设备-反应器

File diff suppressed because it is too large

View File

@@ -71,20 +71,6 @@ class Registry:
from unilabos.app.web.utils.action_utils import get_yaml_from_goal_type
# 获取 HostNode 类的增强信息,用于自动生成 action schema
host_node_enhanced_info = get_enhanced_class_info(
"unilabos.ros.nodes.presets.host_node:HostNode", use_dynamic=True
)
# 为 test_latency 生成 schema保留原有 description
test_latency_method_info = host_node_enhanced_info.get("action_methods", {}).get("test_latency", {})
test_latency_schema = self._generate_unilab_json_command_schema(
test_latency_method_info.get("args", []),
"test_latency",
test_latency_method_info.get("return_annotation"),
)
test_latency_schema["description"] = "用于测试延迟的动作,返回延迟时间和时间差。"
self.device_type_registry.update(
{
"host_node": {
@@ -166,19 +152,14 @@ class Registry:
},
},
"test_latency": {
"type": (
"UniLabJsonCommandAsync"
if test_latency_method_info.get("is_async", False)
else "UniLabJsonCommand"
),
"type": self.EmptyIn,
"goal": {},
"feedback": {},
"result": {},
"schema": test_latency_schema,
"goal_default": {
arg["name"]: arg["default"]
for arg in test_latency_method_info.get("args", [])
},
"schema": ros_action_to_json_schema(
self.EmptyIn, "用于测试延迟的动作,返回延迟时间和时间差。"
),
"goal_default": {},
"handles": {},
},
"auto-test_resource": {
@@ -498,11 +479,7 @@ class Registry:
return status_schema
def _generate_unilab_json_command_schema(
self,
method_args: List[Dict[str, Any]],
method_name: str,
return_annotation: Any = None,
previous_schema: Dict[str, Any] | None = None,
self, method_args: List[Dict[str, Any]], method_name: str, return_annotation: Any = None
) -> Dict[str, Any]:
"""
根据UniLabJsonCommand方法信息生成JSON Schema暂不支持嵌套类型
@@ -511,7 +488,6 @@ class Registry:
method_args: 方法信息字典包含args等
method_name: 方法名称
return_annotation: 返回类型注解用于生成result schema仅支持TypedDict
previous_schema: 之前的 schema用于保留 goal/feedback/result 下一级字段的 description
Returns:
JSON Schema格式的参数schema
@@ -545,7 +521,7 @@ class Registry:
if return_annotation is not None and self._is_typed_dict(return_annotation):
result_schema = self._generate_typed_dict_result_schema(return_annotation)
final_schema = {
return {
"title": f"{method_name}参数",
"description": f"",
"type": "object",
@@ -553,40 +529,6 @@ class Registry:
"required": ["goal"],
}
# 保留之前 schema 中 goal/feedback/result 下一级字段的 description
if previous_schema:
self._preserve_field_descriptions(final_schema, previous_schema)
return final_schema
def _preserve_field_descriptions(self, new_schema: Dict[str, Any], previous_schema: Dict[str, Any]) -> None:
"""
保留之前 schema 中 goal/feedback/result 下一级字段的 description 和 title
Args:
new_schema: 新生成的 schema会被修改
previous_schema: 之前的 schema
"""
for section in ["goal", "feedback", "result"]:
new_section = new_schema.get("properties", {}).get(section, {})
prev_section = previous_schema.get("properties", {}).get(section, {})
if not new_section or not prev_section:
continue
new_props = new_section.get("properties", {})
prev_props = prev_section.get("properties", {})
for field_name, field_schema in new_props.items():
if field_name in prev_props:
prev_field = prev_props[field_name]
# 保留字段的 description
if "description" in prev_field and prev_field["description"]:
field_schema["description"] = prev_field["description"]
# 保留字段的 title用户自定义的中文名
if "title" in prev_field and prev_field["title"]:
field_schema["title"] = prev_field["title"]
def _is_typed_dict(self, annotation: Any) -> bool:
"""
检查类型注解是否是TypedDict
@@ -755,10 +697,13 @@ class Registry:
sorted(device_config["class"]["status_types"].items())
)
if complete_registry:
# 保存原有的 action 配置(用于保留 schema 的 description 和 handles 等)
old_action_configs = {}
# 保存原有的description信息
old_descriptions = {}
for action_name, action_config in device_config["class"]["action_value_mappings"].items():
old_action_configs[action_name] = action_config
if "description" in action_config.get("schema", {}):
description = action_config["schema"]["description"]
if len(description):
old_descriptions[action_name] = action_config["schema"]["description"]
device_config["class"]["action_value_mappings"] = {
k: v
@@ -774,15 +719,10 @@ class Registry:
"feedback": {},
"result": {},
"schema": self._generate_unilab_json_command_schema(
v["args"],
k,
v.get("return_annotation"),
# 传入旧的 schema 以保留字段 description
old_action_configs.get(f"auto-{k}", {}).get("schema"),
v["args"], k, v.get("return_annotation")
),
"goal_default": {i["name"]: i["default"] for i in v["args"]},
# 保留原有的 handles 配置
"handles": old_action_configs.get(f"auto-{k}", {}).get("handles", []),
"handles": [],
"placeholder_keys": {
i["name"]: (
"unilabos_resources"
@@ -806,14 +746,12 @@ class Registry:
if k not in device_config["class"]["action_value_mappings"]
}
)
# 恢复原有的 description 信息(auto- 开头的动作
for action_name, old_config in old_action_configs.items():
# 恢复原有的description信息auto开头的不修改
for action_name, description in old_descriptions.items():
if action_name in device_config["class"]["action_value_mappings"]: # 有一些会被删除
old_schema = old_config.get("schema", {})
if "description" in old_schema and old_schema["description"]:
device_config["class"]["action_value_mappings"][action_name]["schema"][
"description"
] = old_schema["description"]
device_config["class"]["action_value_mappings"][action_name]["schema"][
"description"
] = description
device_config["init_param_schema"] = {}
device_config["init_param_schema"]["config"] = self._generate_unilab_json_command_schema(
enhanced_info["init_params"], "__init__"

View File

@@ -770,16 +770,13 @@ def ros_message_to_json_schema(msg_class: Any, field_name: str) -> Dict[str, Any
return schema
def ros_action_to_json_schema(
action_class: Any, description="", previous_schema: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
def ros_action_to_json_schema(action_class: Any, description="") -> Dict[str, Any]:
"""
将 ROS Action 类转换为 JSON Schema
Args:
action_class: ROS Action 类
description: 描述
previous_schema: 之前的 schema用于保留 goal/feedback/result 下一级字段的 description
Returns:
完整的 JSON Schema 定义
@@ -813,44 +810,9 @@ def ros_action_to_json_schema(
"required": ["goal"],
}
# 保留之前 schema 中 goal/feedback/result 下一级字段的 description
if previous_schema:
_preserve_field_descriptions(schema, previous_schema)
return schema
def _preserve_field_descriptions(
new_schema: Dict[str, Any], previous_schema: Dict[str, Any]
) -> None:
"""
保留之前 schema 中 goal/feedback/result 下一级字段的 description 和 title
Args:
new_schema: 新生成的 schema会被修改
previous_schema: 之前的 schema
"""
for section in ["goal", "feedback", "result"]:
new_section = new_schema.get("properties", {}).get(section, {})
prev_section = previous_schema.get("properties", {}).get(section, {})
if not new_section or not prev_section:
continue
new_props = new_section.get("properties", {})
prev_props = prev_section.get("properties", {})
for field_name, field_schema in new_props.items():
if field_name in prev_props:
prev_field = prev_props[field_name]
# 保留字段的 description
if "description" in prev_field and prev_field["description"]:
field_schema["description"] = prev_field["description"]
# 保留字段的 title用户自定义的中文名
if "title" in prev_field and prev_field["title"]:
field_schema["title"] = prev_field["title"]
def convert_ros_action_to_jsonschema(
action_name_or_type: Union[str, Type], output_file: Optional[str] = None, format: str = "json"
) -> Dict[str, Any]:

View File

@@ -49,6 +49,7 @@ from unilabos.resources.resource_tracker import (
ResourceTreeInstance,
ResourceDictInstance,
)
from unilabos.ros.x.rclpyx import get_event_loop
from unilabos.ros.utils.driver_creator import WorkstationNodeCreator, PyLabRobotCreator, DeviceClassCreator
from rclpy.task import Task, Future
from unilabos.utils.import_manager import default_manager
@@ -184,7 +185,7 @@ class PropertyPublisher:
f"创建发布者 {name} 失败,可能由于注册表有误,类型: {msg_type},错误: {ex}\n{traceback.format_exc()}"
)
self.timer = node.create_timer(self.timer_period, self.publish_property)
self.__loop = ROS2DeviceNode.get_asyncio_loop()
self.__loop = get_event_loop()
str_msg_type = str(msg_type)[8:-2]
self.node.lab_logger().trace(f"发布属性: {name}, 类型: {str_msg_type}, 周期: {initial_period}秒, QoS: {qos}")
@@ -391,12 +392,9 @@ class BaseROS2DeviceNode(Node, Generic[T]):
parent_resource = self.resource_tracker.figure_resource(
{"name": bind_parent_id}
)
for r in rts.root_nodes:
# noinspection PyUnresolvedReferences
r.res_content.parent_uuid = parent_resource.unilabos_uuid
else:
for r in rts.root_nodes:
r.res_content.parent_uuid = self.uuid
for r in rts.root_nodes:
# noinspection PyUnresolvedReferences
r.res_content.parent_uuid = parent_resource.unilabos_uuid
if len(LIQUID_INPUT_SLOT) and LIQUID_INPUT_SLOT[0] == -1 and len(rts.root_nodes) == 1 and isinstance(rts.root_nodes[0], RegularContainer):
# noinspection PyTypeChecker
@@ -1756,15 +1754,6 @@ class ROS2DeviceNode:
它不继承设备类,而是通过代理模式访问设备类的属性和方法。
"""
# 类变量,用于循环管理
_asyncio_loop = None
_asyncio_loop_running = False
_asyncio_loop_thread = None
@classmethod
def get_asyncio_loop(cls):
return cls._asyncio_loop
@staticmethod
async def safe_task_wrapper(trace_callback, func, **kwargs):
try:
@@ -1841,11 +1830,6 @@ class ROS2DeviceNode:
print_publish: 是否打印发布信息
driver_is_ros:
"""
# 在初始化时检查循环状态
if ROS2DeviceNode._asyncio_loop_running and ROS2DeviceNode._asyncio_loop_thread is not None:
pass
elif ROS2DeviceNode._asyncio_loop_thread is None:
self._start_loop()
# 保存设备类是否支持异步上下文
self._has_async_context = hasattr(driver_class, "__aenter__") and hasattr(driver_class, "__aexit__")
@@ -1937,17 +1921,6 @@ class ROS2DeviceNode:
except Exception as e:
self._ros_node.lab_logger().error(f"设备后初始化失败: {e}")
def _start_loop(self):
def run_event_loop():
loop = asyncio.new_event_loop()
ROS2DeviceNode._asyncio_loop = loop
asyncio.set_event_loop(loop)
loop.run_forever()
ROS2DeviceNode._asyncio_loop_thread = threading.Thread(target=run_event_loop, daemon=True, name="ROS2DeviceNode")
ROS2DeviceNode._asyncio_loop_thread.start()
logger.info(f"循环线程已启动")
class DeviceInfoType(TypedDict):
id: str
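
The hunks above drop ROS2DeviceNode's class-level loop management (_asyncio_loop, get_asyncio_loop, _start_loop) in favour of get_event_loop() from unilabos.ros.x.rclpyx, whose diff appears at the end of this compare. A minimal sketch of that module-level accessor, with the background-thread startup assumed from the removed _start_loop rather than from rclpyx.py itself:

    # Illustrative sketch, not part of the diff: a module-level asyncio loop shared
    # via get_event_loop(), mirroring the removed ROS2DeviceNode._start_loop().
    import asyncio
    import threading

    loop = None  # set once the background thread starts; None before that

    def get_event_loop():
        return loop

    def start_loop() -> None:
        def run_event_loop() -> None:
            global loop
            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop)
            loop.run_forever()
        threading.Thread(target=run_event_loop, daemon=True, name="rclpyx-loop").start()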

View File

@@ -5,8 +5,7 @@ import threading
import time
import traceback
import uuid
from typing import TYPE_CHECKING, Optional, Dict, Any, List, ClassVar, Set, Union
from typing_extensions import TypedDict
from typing import TYPE_CHECKING, Optional, Dict, Any, List, ClassVar, Set, TypedDict, Union
from action_msgs.msg import GoalStatus
from geometry_msgs.msg import Point
@@ -63,18 +62,6 @@ class TestResourceReturn(TypedDict):
devices: List[DeviceSlot]
class TestLatencyReturn(TypedDict):
"""test_latency方法的返回值类型"""
avg_rtt_ms: float
avg_time_diff_ms: float
max_time_error_ms: float
task_delay_ms: float
raw_delay_ms: float
test_count: int
status: str
class HostNode(BaseROS2DeviceNode):
"""
主机节点类,负责管理设备、资源和控制器
@@ -866,13 +853,8 @@ class HostNode(BaseROS2DeviceNode):
# 适配后端的一些额外处理
return_value = return_info.get("return_value")
if isinstance(return_value, dict):
unilabos_samples = return_value.pop("unilabos_samples", None)
if isinstance(unilabos_samples, list) and unilabos_samples:
self.lab_logger().info(
f"[Host Node] Job {job_id[:8]} returned {len(unilabos_samples)} sample(s): "
f"{[s.get('name', s.get('id', 'unknown')) if isinstance(s, dict) else str(s)[:20] for s in unilabos_samples[:5]]}"
f"{'...' if len(unilabos_samples) > 5 else ''}"
)
unilabos_samples = return_info.get("unilabos_samples")
if isinstance(unilabos_samples, list):
return_info["unilabos_samples"] = unilabos_samples
suc = return_info.get("suc", False)
if not suc:
@@ -899,7 +881,7 @@ class HostNode(BaseROS2DeviceNode):
# 清理 _goals 中的记录
if job_id in self._goals:
del self._goals[job_id]
self.lab_logger().trace(f"[Host Node] Removed goal {job_id[:8]} from _goals")
self.lab_logger().debug(f"[Host Node] Removed goal {job_id[:8]} from _goals")
# 存储结果供 HTTP API 查询
try:
@@ -1344,20 +1326,10 @@ class HostNode(BaseROS2DeviceNode):
self.lab_logger().debug(f"[Host Node-Resource] List parameters: {request}")
return response
def test_latency(self) -> TestLatencyReturn:
def test_latency(self):
"""
测试网络延迟的action实现
通过5次ping-pong机制校对时间误差并计算实际延迟
Returns:
TestLatencyReturn: 包含延迟测试结果的字典,包括:
- avg_rtt_ms: 平均往返时间(毫秒)
- avg_time_diff_ms: 平均时间差(毫秒)
- max_time_error_ms: 最大时间误差(毫秒)
- task_delay_ms: 实际任务延迟(毫秒),-1表示无法计算
- raw_delay_ms: 原始时间差(毫秒),-1表示无法计算
- test_count: 有效测试次数
- status: 测试状态,"success"表示成功,"all_timeout"表示全部超时
"""
import uuid as uuid_module
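The removed docstring above describes a five-round ping-pong that estimates both round-trip time and clock offset. As a point of reference only (the real `HostNode` bookkeeping may differ), the per-sample arithmetic for such a scheme usually looks like the NTP-style sketch below; `t0`/`t3` are host-side send/receive timestamps and `t1`/`t2` are the peer's receive/reply timestamps, all in seconds.

```python
def analyze_sample(t0: float, t1: float, t2: float, t3: float) -> dict:
    """Sketch of one ping-pong sample (timestamp names are illustrative)."""
    rtt_ms = ((t3 - t0) - (t2 - t1)) * 1000.0                 # RTT minus peer processing time
    time_diff_ms = (((t1 - t0) + (t2 - t3)) / 2.0) * 1000.0   # NTP-style clock-offset estimate
    return {"rtt_ms": rtt_ms, "time_diff_ms": time_diff_ms}


# Averaging several samples, as the statistics block below does with rtt_ms / time_diff_ms:
samples = [analyze_sample(0.000, 0.010, 0.011, 0.021) for _ in range(5)]
avg_rtt_ms = sum(s["rtt_ms"] for s in samples) / len(samples)
avg_time_diff_ms = sum(s["time_diff_ms"] for s in samples) / len(samples)
```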
@@ -1420,15 +1392,7 @@ class HostNode(BaseROS2DeviceNode):
if not ping_results:
self.lab_logger().error("❌ 所有ping-pong测试都失败了")
return {
"avg_rtt_ms": -1.0,
"avg_time_diff_ms": -1.0,
"max_time_error_ms": -1.0,
"task_delay_ms": -1.0,
"raw_delay_ms": -1.0,
"test_count": 0,
"status": "all_timeout",
}
return {"status": "all_timeout"}
# 统计分析
rtts = [r["rtt_ms"] for r in ping_results]
@@ -1436,7 +1400,7 @@ class HostNode(BaseROS2DeviceNode):
avg_rtt_ms = sum(rtts) / len(rtts)
avg_time_diff_ms = sum(time_diffs) / len(time_diffs)
max_time_diff_error_ms: float = max(abs(min(time_diffs)), abs(max(time_diffs)))
max_time_diff_error_ms = max(abs(min(time_diffs)), abs(max(time_diffs)))
self.lab_logger().info("-" * 50)
self.lab_logger().info("[测试统计]")
@@ -1476,7 +1440,7 @@ class HostNode(BaseROS2DeviceNode):
self.lab_logger().info("=" * 60)
res: TestLatencyReturn = {
return {
"avg_rtt_ms": avg_rtt_ms,
"avg_time_diff_ms": avg_time_diff_ms,
"max_time_error_ms": max_time_diff_error_ms,
@@ -1487,14 +1451,9 @@ class HostNode(BaseROS2DeviceNode):
"test_count": len(ping_results),
"status": "success",
}
return res
def test_resource(
self,
resource: ResourceSlot = None,
resources: List[ResourceSlot] = None,
device: DeviceSlot = None,
devices: List[DeviceSlot] = None,
self, resource: ResourceSlot = None, resources: List[ResourceSlot] = None, device: DeviceSlot = None, devices: List[DeviceSlot] = None
) -> TestResourceReturn:
if resources is None:
resources = []
@@ -1555,9 +1514,7 @@ class HostNode(BaseROS2DeviceNode):
# 构建服务地址
srv_address = f"/srv{namespace}/s2c_resource_tree"
self.lab_logger().trace(
f"[Host Node-Resource] Host -> {device_id} ResourceTree {action} operation started -------"
)
self.lab_logger().trace(f"[Host Node-Resource] Host -> {device_id} ResourceTree {action} operation started -------")
# 创建服务客户端
sclient = self.create_client(SerialCommand, srv_address)
@@ -1592,9 +1549,7 @@ class HostNode(BaseROS2DeviceNode):
time.sleep(0.05)
response = future.result()
self.lab_logger().trace(
f"[Host Node-Resource] Host -> {device_id} ResourceTree {action} operation completed -------"
)
self.lab_logger().trace(f"[Host Node-Resource] Host -> {device_id} ResourceTree {action} operation completed -------")
return True
except Exception as e:


unilabos/ros/x/rclpyx.py (new file, 182 lines)

@@ -0,0 +1,182 @@
import asyncio
from asyncio import events
import threading
import rclpy
from rclpy.impl.implementation_singleton import rclpy_implementation as _rclpy
from rclpy.executors import await_or_execute, Executor
from rclpy.action import ActionClient, ActionServer
from rclpy.action.server import ServerGoalHandle, GoalResponse, GoalInfo, GoalStatus
from std_msgs.msg import String
from action_tutorials_interfaces.action import Fibonacci
loop = None
def get_event_loop():
global loop
return loop
async def default_handle_accepted_callback_async(goal_handle):
"""Execute the goal."""
await goal_handle.execute()
class ServerGoalHandleX(ServerGoalHandle):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
async def execute(self, execute_callback=None):
# It's possible that there has been a request to cancel the goal prior to executing.
# In this case we want to avoid the illegal state transition to EXECUTING
# but still call the users execute callback to let them handle canceling the goal.
if not self.is_cancel_requested:
self._update_state(_rclpy.GoalEvent.EXECUTE)
await self._action_server.notify_execute_async(self, execute_callback)
class ActionServerX(ActionServer):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.register_handle_accepted_callback(default_handle_accepted_callback_async)
async def _execute_goal_request(self, request_header_and_message):
request_header, goal_request = request_header_and_message
goal_uuid = goal_request.goal_id
goal_info = GoalInfo()
goal_info.goal_id = goal_uuid
self._node.get_logger().debug('New goal request with ID: {0}'.format(goal_uuid.uuid))
# Check if goal ID is already being tracked by this action server
with self._lock:
goal_id_exists = self._handle.goal_exists(goal_info)
accepted = False
if not goal_id_exists:
# Call user goal callback
response = await await_or_execute(self._goal_callback, goal_request.goal)
if not isinstance(response, GoalResponse):
self._node.get_logger().warning(
'Goal request callback did not return a GoalResponse type. Rejecting goal.')
else:
accepted = GoalResponse.ACCEPT == response
if accepted:
# Stamp time of acceptance
goal_info.stamp = self._node.get_clock().now().to_msg()
# Create a goal handle
try:
with self._lock:
goal_handle = ServerGoalHandleX(self, goal_info, goal_request.goal)
except RuntimeError as e:
self._node.get_logger().error(
'Failed to accept new goal with ID {0}: {1}'.format(goal_uuid.uuid, e))
accepted = False
else:
self._goal_handles[bytes(goal_uuid.uuid)] = goal_handle
# Send response
response_msg = self._action_type.Impl.SendGoalService.Response()
response_msg.accepted = accepted
response_msg.stamp = goal_info.stamp
self._handle.send_goal_response(request_header, response_msg)
if not accepted:
self._node.get_logger().debug('New goal rejected: {0}'.format(goal_uuid.uuid))
return
self._node.get_logger().debug('New goal accepted: {0}'.format(goal_uuid.uuid))
# Provide the user a reference to the goal handle
# await await_or_execute(self._handle_accepted_callback, goal_handle)
asyncio.create_task(self._handle_accepted_callback(goal_handle))
async def notify_execute_async(self, goal_handle, execute_callback):
# Use provided callback, defaulting to a previously registered callback
if execute_callback is None:
if self._execute_callback is None:
return
execute_callback = self._execute_callback
# Schedule user callback for execution
self._node.get_logger().info(f"{events.get_running_loop()}")
asyncio.create_task(self._execute_goal(execute_callback, goal_handle))
# loop = asyncio.new_event_loop()
# asyncio.set_event_loop(loop)
# task = loop.create_task(self._execute_goal(execute_callback, goal_handle))
# await task
class ActionClientX(ActionClient):
feedback_queue = asyncio.Queue()
async def feedback_cb(self, msg):
await self.feedback_queue.put(msg)
async def send_goal_async(self, goal_msg):
goal_future = super().send_goal_async(
goal_msg,
feedback_callback=self.feedback_cb
)
client_goal_handle = await asyncio.ensure_future(goal_future)
if not client_goal_handle.accepted:
raise Exception("Goal rejected.")
result_future = client_goal_handle.get_result_async()
while True:
feedback_future = asyncio.ensure_future(self.feedback_queue.get())
tasks = [result_future, feedback_future]
await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
if result_future.done():
result = result_future.result().result
yield (None, result)
break
else:
feedback = feedback_future.result().feedback
yield (feedback, None)
async def main(node):
print('Node started.')
action_client = ActionClientX(node, Fibonacci, 'fibonacci')
goal_msg = Fibonacci.Goal()
goal_msg.order = 10
async for (feedback, result) in action_client.send_goal_async(goal_msg):
if feedback:
print(f'Feedback: {feedback}')
else:
print(f'Result: {result}')
print('Finished.')
async def ros_loop_node(node):
while rclpy.ok():
rclpy.spin_once(node, timeout_sec=0)
await asyncio.sleep(1e-4)
async def ros_loop(executor: Executor):
while rclpy.ok():
executor.spin_once(timeout_sec=0)
await asyncio.sleep(1e-4)
def run_event_loop():
global loop
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.run_forever()
def run_event_loop_in_thread():
thread = threading.Thread(target=run_event_loop, args=())
thread.start()
if __name__ == "__main__":
rclpy.init()
node = rclpy.create_node('async_subscriber')
future = asyncio.wait([ros_loop_node(node), main(node)])  # spin the bare node and pass it to main()
asyncio.get_event_loop().run_until_complete(future)


@@ -1,28 +0,0 @@
{
"nodes": [
{
"id": "workbench_1",
"name": "虚拟工作台",
"children": [],
"parent": null,
"type": "device",
"class": "virtual_workbench",
"position": {
"x": 400,
"y": 300,
"z": 0
},
"config": {
"arm_operation_time": 3.0,
"heating_time": 10.0,
"num_heating_stations": 3
},
"data": {
"status": "Ready",
"arm_state": "idle",
"message": "工作台就绪"
}
}
],
"links": []
}


@@ -0,0 +1,187 @@
# UniLabOS Logging Configuration Guide
> **File**: `unilabos/utils/log.py`
> **Last updated**: 2026-01-11
> **Maintainer**: Uni-Lab-OS development team
This document describes the log-level settings applied to third-party libraries and internal modules by the UniLabOS logging system, so that the console is not flooded with DEBUG output.
---
## 📋 Suppressed logs
The following libraries/modules are set to **WARNING** or **INFO**, so their DEBUG output is no longer shown:
### 1. pymodbus (Modbus communication library)
**Configured in**: `log.py`, lines 196-200
```python
# pymodbus 库的日志太详细,设置为 WARNING
logging.getLogger('pymodbus').setLevel(logging.WARNING)
logging.getLogger('pymodbus.logging').setLevel(logging.WARNING)
logging.getLogger('pymodbus.logging.base').setLevel(logging.WARNING)
logging.getLogger('pymodbus.logging.decoders').setLevel(logging.WARNING)
```
**Why it is suppressed**:
- At DEBUG level, pymodbus logs every single Modbus transaction in detail
- including raw frames such as `Processing: 0x5 0x1e 0x0 0x0...`
- and decoding output such as `decoded PDU function_code(3 sub -1) -> ReadHoldingRegistersResponse(...)`
- These messages are of little value in day-to-day use and scroll the console very quickly
**Typical suppressed log lines**:
```
[DEBUG] Processing: 0x5 0x1e 0x0 0x0 0x0 0x7 0x1 0x3 0x4 0x0 0x0 0x0 0x0 [handleFrame:72] [pymodbus.logging.base]
[DEBUG] decoded PDU function_code(3 sub -1) -> ReadHoldingRegistersResponse(...) [decode:79] [pymodbus.logging.decoders]
```
---
### 2. websockets (WebSocket library)
**Configured in**: `log.py`, lines 202-205
```python
# websockets 库的日志输出较多,设置为 WARNING
logging.getLogger('websockets').setLevel(logging.WARNING)
logging.getLogger('websockets.client').setLevel(logging.WARNING)
logging.getLogger('websockets.server').setLevel(logging.WARNING)
```
**Why it is suppressed**:
- At DEBUG level, connect, disconnect, and heartbeat events are logged very frequently
- For a long-running service these messages add little value
---
### 3. ROS Host Node (device status updates)
**Configured in**: `log.py`, lines 207-208
```python
# ROS 节点的状态更新日志过于频繁,设置为 INFO
logging.getLogger('unilabos.ros.nodes.presets.host_node').setLevel(logging.INFO)
```
**Why it is suppressed**:
- Device status values (such as glove box pressure) refresh every few seconds
- DEBUG logging records every single change, flooding the log
- These frequent status updates are of little value for debugging
**Typical suppressed log lines**:
```
[DEBUG] [/devices/host_node] Status updated: BatteryStation.data_glove_box_pressure = 4.229457855224609 [property_callback:666] [unilabos.ros.nodes.presets.host_node]
```
---
### 4. asyncio and urllib3
**Configured in**: `log.py`, lines 224-225
```python
logging.getLogger("asyncio").setLevel(logging.INFO)
logging.getLogger("urllib3").setLevel(logging.INFO)
```
**Why they are suppressed**:
- asyncio: internal debug details of the async I/O machinery
- urllib3: connection-pool, retry, and other details from the HTTP client
---
## 🔧 How to temporarily re-enable these logs (for debugging)
### Method 1: Edit log.py (permanent)
In the `configure_logger()` function in `log.py`, change the corresponding logger's level to `logging.DEBUG`:
```python
# 临时启用 pymodbus 的 DEBUG 日志
logging.getLogger('pymodbus').setLevel(logging.DEBUG)
logging.getLogger('pymodbus.logging').setLevel(logging.DEBUG)
logging.getLogger('pymodbus.logging.base').setLevel(logging.DEBUG)
logging.getLogger('pymodbus.logging.decoders').setLevel(logging.DEBUG)
```
### Method 2: Enable in code (one-off debugging)
Add the following to the file you are debugging:
```python
import logging
# 临时启用 pymodbus DEBUG 日志
logging.getLogger('pymodbus').setLevel(logging.DEBUG)
# 你的 Modbus 调试代码
...
# 调试完成后恢复
logging.getLogger('pymodbus').setLevel(logging.WARNING)
```
### Method 3: Environment variable or config file (recommended)
In the future, startup options such as `--debug-modbus` could be added to control this dynamically.
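As a rough illustration of what such a switch could look like, the sketch below reads hypothetical environment variables (none of these exist in UniLabOS today) and flips the relevant loggers back to DEBUG; it would have to run near the end of `configure_logger()` so it overrides the defaults above.

```python
import logging
import os


def apply_debug_overrides() -> None:
    """Sketch only: re-enable selected third-party DEBUG logs via env vars."""
    # Hypothetical variable names, shown here purely for illustration.
    if os.environ.get("UNILABOS_DEBUG_MODBUS") == "1":
        for name in ("pymodbus", "pymodbus.logging",
                     "pymodbus.logging.base", "pymodbus.logging.decoders"):
            logging.getLogger(name).setLevel(logging.DEBUG)
    if os.environ.get("UNILABOS_DEBUG_WEBSOCKETS") == "1":
        for name in ("websockets", "websockets.client", "websockets.server"):
            logging.getLogger(name).setLevel(logging.DEBUG)
```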
---
## 📊 Log levels
| Level | Value | Purpose | Shown |
|------|------|------|---------|
| TRACE | 5 | Most detailed trace information | ✅ |
| DEBUG | 10 | Debug information | ✅ |
| INFO | 20 | General information | ✅ |
| WARNING | 30 | Warnings | ✅ |
| ERROR | 40 | Errors | ✅ |
| CRITICAL | 50 | Critical errors | ✅ |
**Current configuration**:
- UniLabOS's own code: DEBUG and above are all shown
- pymodbus/websockets: **WARNING** and above (DEBUG/INFO suppressed)
- ROS host_node: **INFO** and above (DEBUG suppressed)
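TRACE (5) is not one of the standard library levels; a custom level like this is normally registered along the lines of the sketch below. This is only an illustration of the general technique; the actual registration in `log.py` may be implemented differently.

```python
import logging

TRACE = 5
logging.addLevelName(TRACE, "TRACE")


def _trace(self, message, *args, **kwargs):
    # Mirror the built-in level methods so `logger.trace(...)` works.
    if self.isEnabledFor(TRACE):
        self._log(TRACE, message, args, **kwargs)


logging.Logger.trace = _trace  # type: ignore[attr-defined]

logging.basicConfig(level=TRACE)
logging.getLogger(__name__).trace("visible only when the effective level is <= 5")
```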
---
## ⚠️ Important notes
### When changes take effect
- After editing `log.py`, the **unilab service must be restarted** for the change to take effect
- No reinstall or recompile is required
### Debugging Modbus communication issues
If you need to debug a Modbus communication fault:
1. Temporarily enable pymodbus DEBUG logging (Method 2)
2. Reproduce the problem
3. Inspect the detailed communication log
4. Remember to restore the WARNING level afterwards (see the sketch after this list)
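To make step 4 hard to forget, a small context manager can restore the previous level automatically once the debugging block exits. This is only a sketch and is not part of `log.py`:

```python
import logging
from contextlib import contextmanager


@contextmanager
def temporary_log_level(logger_name: str, level: int):
    """Sketch: temporarily change a logger's level, then always restore it."""
    log = logging.getLogger(logger_name)
    previous = log.level
    log.setLevel(level)
    try:
        yield log
    finally:
        log.setLevel(previous)


# Usage while reproducing a Modbus fault:
with temporary_log_level("pymodbus", logging.DEBUG):
    ...  # run the communication you want to inspect
```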
### Debugging device status issues
If you need to debug device status updates:
```python
logging.getLogger('unilabos.ros.nodes.presets.host_node').setLevel(logging.DEBUG)
```
---
## 📝 Maintenance record
| Date | Change | Author |
|------|---------|--------|
| 2026-01-11 | Initial version; added pymodbus, websockets, and ROS host_node suppression | - |
| 2026-01-07 | Added pymodbus and websockets suppression (log-0107.py) | - |
---
## 🔗 Related files
- `log.py` - main logging configuration file
- `unilabos/devices/workstation/coin_cell_assembly/` - coin cell assembly workstation code that uses Modbus
- `unilabos/ros/nodes/presets/host_node.py` - ROS host node code
---
**Maintenance note**: If a new third-party library is added or a new source of log flooding is found, please record it in this document and update the `log.py` configuration.


@@ -182,49 +182,3 @@ def get_all_subscriptions(instance) -> list:
except Exception:
pass
return subscriptions
def not_action(func: F) -> F:
"""
标记方法为非动作的装饰器
用于装饰 driver 类中的方法,使其在 complete_registry 时不被识别为动作。
适用于辅助方法、内部工具方法等不应暴露为设备动作的公共方法。
Example:
class MyDriver:
@not_action
def helper_method(self):
# 这个方法不会被注册为动作
pass
def actual_action(self, param: str):
# 这个方法会被注册为动作
self.helper_method()
Note:
- 可以与其他装饰器组合使用,@not_action 应放在最外层
- 仅影响 complete_registry 的动作识别,不影响方法的正常调用
"""
@wraps(func)
def wrapper(*args, **kwargs):
return func(*args, **kwargs)
# 在函数上附加标记
wrapper._is_not_action = True # type: ignore[attr-defined]
return wrapper # type: ignore[return-value]
def is_not_action(func) -> bool:
"""
检查函数是否被标记为非动作
Args:
func: 被检查的函数
Returns:
如果函数被 @not_action 装饰则返回 True否则返回 False
"""
return getattr(func, "_is_not_action", False)
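The hunk above removes `not_action`/`is_not_action`. For reference, the stacking rule from the removed docstring (`@not_action` outermost) looks like this in practice; the decorator bodies are restated here only so the sketch is self-contained, and `log_calls` is purely illustrative.

```python
import functools


def not_action(func):
    # Same idea as the removed decorator: tag the wrapper, change nothing else.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    wrapper._is_not_action = True
    return wrapper


def is_not_action(func) -> bool:
    return getattr(func, "_is_not_action", False)


def log_calls(func):
    # Illustrative second decorator to demonstrate ordering.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper


class MyDriver:
    @not_action   # outermost, as the removed docstring recommends
    @log_calls
    def helper_method(self):
        pass


assert is_not_action(MyDriver.helper_method)
```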


@@ -24,7 +24,6 @@ class EnvironmentChecker:
"msgcenterpy": "msgcenterpy",
"opentrons_shared_data": "opentrons_shared_data",
"typing_extensions": "typing_extensions",
"crcmod": "crcmod-plus",
}
# 特殊安装包(需要特殊处理的包)


@@ -28,7 +28,6 @@ __all__ = [
from ast import Constant
from unilabos.utils import logger
from unilabos.utils.decorator import is_not_action
class ImportManager:
@@ -276,9 +275,6 @@ class ImportManager:
method_info = self._analyze_method_signature(method)
result["status_methods"][actual_name] = method_info
elif not name.startswith("_"):
# 检查是否被 @not_action 装饰器标记
if is_not_action(method):
continue
# 其他非_开头的方法归类为action
method_info = self._analyze_method_signature(method)
result["action_methods"][name] = method_info
@@ -334,9 +330,6 @@ class ImportManager:
if actual_name not in result["status_methods"]:
result["status_methods"][actual_name] = method_info
else:
# 检查是否被 @not_action 装饰器标记
if self._is_not_action_method(node):
continue
# 其他非_开头的方法归类为action
result["action_methods"][method_name] = method_info
return result
@@ -457,13 +450,6 @@ class ImportManager:
return True
return False
def _is_not_action_method(self, node: ast.FunctionDef) -> bool:
"""检查是否是@not_action装饰的方法"""
for decorator in node.decorator_list:
if isinstance(decorator, ast.Name) and decorator.id == "not_action":
return True
return False
def _get_property_name_from_setter(self, node: ast.FunctionDef) -> str:
"""从setter装饰器中获取属性名"""
for decorator in node.decorator_list:


@@ -191,6 +191,21 @@ def configure_logger(loglevel=None, working_dir=None):
# 添加处理器到根日志记录器
root_logger.addHandler(console_handler)
# 降低第三方库的日志级别,避免过多输出
# pymodbus 库的日志太详细,设置为 WARNING
logging.getLogger('pymodbus').setLevel(logging.WARNING)
logging.getLogger('pymodbus.logging').setLevel(logging.WARNING)
logging.getLogger('pymodbus.logging.base').setLevel(logging.WARNING)
logging.getLogger('pymodbus.logging.decoders').setLevel(logging.WARNING)
# websockets 库的日志输出较多,设置为 WARNING
logging.getLogger('websockets').setLevel(logging.WARNING)
logging.getLogger('websockets.client').setLevel(logging.WARNING)
logging.getLogger('websockets.server').setLevel(logging.WARNING)
# ROS 节点的状态更新日志过于频繁,设置为 INFO
logging.getLogger('unilabos.ros.nodes.presets.host_node').setLevel(logging.INFO)
# 如果指定了工作目录,添加文件处理器
if working_dir is not None:


@@ -1,11 +1,7 @@
import psutil
import pywinauto
try:
from pywinauto_recorder import UIApplication
from pywinauto_recorder.player import UIPath, click, focus_on_application, exists, find, get_wrapper_path
except ImportError:
print("未安装pywinauto_recorder部分功能无法使用安装时注意enum")
pass
from pywinauto_recorder import UIApplication
from pywinauto_recorder.player import UIPath, click, focus_on_application, exists, find, get_wrapper_path
from pywinauto.controls.uiawrapper import UIAWrapper
from pywinauto.application import WindowSpecification
from pywinauto import findbestmatch


@@ -1,18 +0,0 @@
networkx
typing_extensions
websockets
msgcenterpy>=0.1.5
opentrons_shared_data
pint
fastapi
jinja2
requests
uvicorn
pyautogui
opcua
pyserial
pandas
crcmod-plus
pymodbus
matplotlib
pylibftdi


@@ -2,7 +2,7 @@
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>unilabos_msgs</name>
<version>0.10.16</version>
<version>0.10.15</version>
<description>ROS2 Messages package for unilabos devices</description>
<maintainer email="changjh@pku.edu.cn">Junhan Chang</maintainer>
<maintainer email="18435084+Xuwznln@users.noreply.github.com">Xuwznln</maintainer>