Compare commits

..

62 Commits

Author SHA1 Message Date
Xuwznln
9feeb0c430 Fix Build 9 2026-01-27 15:51:40 +08:00
Xuwznln
b2f26ffb28 Fix Build 8 2026-01-27 15:39:15 +08:00
dependabot[bot]
4b0d1553e9 ci(deps): bump actions/checkout from 4 to 6 (#223)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-27 15:30:47 +08:00
dependabot[bot]
67ddee2ab2 ci(deps): bump actions/upload-pages-artifact from 3 to 4 (#225)
Bumps [actions/upload-pages-artifact](https://github.com/actions/upload-pages-artifact) from 3 to 4.
- [Release notes](https://github.com/actions/upload-pages-artifact/releases)
- [Commits](https://github.com/actions/upload-pages-artifact/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/upload-pages-artifact
  dependency-version: '4'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-27 15:30:38 +08:00
dependabot[bot]
1bcdad9448 ci(deps): bump actions/upload-artifact from 4 to 6 (#224)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4 to 6.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v4...v6)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-27 15:30:31 +08:00
dependabot[bot]
039c96fe01 ci(deps): bump actions/configure-pages from 4 to 5 (#222)
Bumps [actions/configure-pages](https://github.com/actions/configure-pages) from 4 to 5.
- [Release notes](https://github.com/actions/configure-pages/releases)
- [Commits](https://github.com/actions/configure-pages/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/configure-pages
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-27 15:30:22 +08:00
Xuwznln
e1555d10a0 Fix Build 7 2026-01-27 15:14:31 +08:00
Xuwznln
f2a96b2041 Fix Build 6 2026-01-27 14:36:35 +08:00
Xuwznln
329349639e Fix Build 5 2026-01-27 14:25:34 +08:00
Xuwznln
e4cc111523 Fix Build 4 2026-01-27 14:19:56 +08:00
Xuwznln
d245ceef1b Fix Build 3 2026-01-27 14:15:16 +08:00
Xuwznln
6db7fbd721 Fix Build 2 2026-01-27 13:45:32 +08:00
Xuwznln
ab05b858e1 Fix Build 1 2026-01-27 13:35:35 +08:00
Xuwznln
43e4c71a8e Update to ROS2 Humble 0.7 2026-01-27 13:31:24 +08:00
Xuwznln
2cf58ca452 Upgrade to py 3.11.14; ros 0.7; unilabos 0.10.16 2026-01-26 16:47:54 +08:00
Xuwznln
fd73bb7dcb CI Check Fix 5 2026-01-26 08:47:27 +08:00
Xuwznln
a02cecfd18 CI Check Fix 4 2026-01-26 08:20:17 +08:00
Xuwznln
d6accc3f1c CI Check Fix 3 2026-01-26 08:14:21 +08:00
Xuwznln
39dc443399 CI Check Fix 2 2026-01-26 02:23:40 +08:00
Xuwznln
37b1fca962 CI Check Fix 1 2026-01-26 02:22:21 +08:00
Xuwznln
216f19fb62 Workbench example, adjust log level, and ci check (#220)
* TestLatency Return Value Example & gitignore update

* Adjust log level & Add workbench virtual example & Add not-action decorator & Add check_mode

* Add CI Check
2026-01-26 02:15:13 +08:00
Xuwznln
ec7ca6a1fe Fix/workstation yb revision (#217)
* Revert log change & update registry

* Revert opcua client & move electrolyte node
2026-01-17 16:50:20 +08:00
Xuwznln
4c8022ee95 Workstation yb merge dev ready 260113 (#216)
* feat(bioyond): add experiment design calculation, supporting compound ratio and titration ratio parameters

* feat(bioyond): add measurement vial function with basic parameter configuration

* feat(bioyond): add measurement vial configuration, supporting new device parameters

* feat(bioyond): update warehouse layout and dimensions to support vertically arranged measurement vials and reagent storage stacks

* feat(bioyond): optimize task creation flow, clearing the task queue whether or not creation succeeds to avoid duplicate accumulation

* feat(bioyond): add reactor temperature setting, with temperature range checks and exception handling

* feat(bioyond): adjust reactor position configuration and unify coordinate format

* feat(bioyond): add scheduler startup, supporting task-queue execution with exception handling

* feat(bioyond): improve scheduler startup, adding exception handling and updating related configuration

* feat(opcua): improve node ID parsing compatibility and data type handling

Improve node ID parsing to support multiple formats, including string and numeric identifiers
Add data type conversion so written values match the expected type
Improve error messages to ease debugging of node connection issues

* feat(registry): add device configuration file for the post-processing station

Add the post-processing station YAML configuration, including action mappings, status types, and device descriptions

* Add scheduler startup, merge material parameter configuration, and improve material parameter handling

* Add automatic synchronization of workflow sequences from the Bioyond system and update related configuration

* fix: stay compatible with workflow_sequence being overridden as a property in BioyondReactionStation

* fix: synchronize workflow sequence

* feat: remove commented workflow synchronization from `reaction_station.py`.

* Add time constraint feature and related configuration

* fix: automatically update the material cache, updating entries when materials are added and removing them on deletion

* fix: handle both string and dict return values when adding materials so the cache is updated correctly

* fix: change Bioyond error reporting to material-change reporting, adjusting logging and response messages

* feat: add experiment report simplification, removing redundant information while keeping key details

* feat: add task status event publishing to monitor and report running, timeout, completed, and error states

* fix: fix data format error when adding materials

* Refactor bioyond_dispensing_station and reaction_station_bioyond YAML configurations

- Removed redundant action value mappings from bioyond_dispensing_station.
- Updated goal properties in bioyond_dispensing_station to use enums for target_stack and other parameters.
- Changed data types for end_point and start_point in reaction_station_bioyond to use string enums (Start, End).
- Simplified descriptions and updated measurement units from μL to mL where applicable.
- Removed unused commands from reaction_station_bioyond to streamline the configuration.

* fix: change the material unit from μL to mL

* fix: refresh_material_cache

* feat: fetch workflow step IDs dynamically and improve workflow configuration

* feat: add ability to clear all non-core workflows on the server

* fix: fix serialization and deserialization methods of the Bottle class

* feat: enhance material cache update logic to handle detailed information in returned data

* Add debug log

* feat(workstation): update bioyond config migration and coin cell material search logic

- Migrate bioyond_cell config to JSON structure and remove global variable dependencies
- Implement material search confirmation dialog auto-handling
- Add documentation: 20260113_物料搜寻确认弹窗自动处理功能.md and 20260113_配置迁移修改总结.md

* Refactor module paths for Bioyond devices in YAML configuration files

- Updated the module path for BioyondDispensingStation in bioyond_dispensing_station.yaml to reflect the new directory structure.
- Updated the module path for BioyondReactionStation and BioyondReactor in reaction_station_bioyond.yaml to align with the revised organization of the codebase.

* fix: unhashable type error in WareHouse; improve parent node deduplication logic

* refactor: Move config from module to instance initialization

* fix: correct misspelled reaction_station directory name

* feat: Integrate material search logic and cleanup deprecated files

- Update coin_cell_assembly.py with material search dialog handling
- Update YB_warehouses.py with latest warehouse configurations
- Remove outdated documentation and test data files

* Refactor: Use instance attributes for action names and workflow step IDs

* refactor: Split tipbox storage into left and right warehouses

* refactor: Merge tipbox storage left and right into single warehouse

---------

Co-authored-by: ZiWei <131428629+ZiWei09@users.noreply.github.com>
Co-authored-by: Andy6M <xieqiming1132@qq.com>
2026-01-17 15:44:18 +08:00
ZiWei
ad21644db0 fix: unhashable type error in WareHouse; improve parent node deduplication logic 2026-01-14 20:15:05 +08:00
Xuwznln
9dfd58e9af fix parent_uuid fetch when bind_parent_id == node_name 2026-01-14 14:17:29 +08:00
Xuwznln
31c9f9a172 Material updates are also reported via the parent node 2026-01-13 20:21:37 +08:00
Xuwznln
02cd8de4c5 Add None conversion for tube rack etc. 2026-01-13 17:49:11 +08:00
Xuwznln
a66603ec1c Add set_liquid example. 2026-01-12 22:24:01 +08:00
Xuwznln
ec015e16cd Add create_resource and test_resource example. 2026-01-12 21:17:28 +08:00
Xuwznln
965bf36e8d Add restart.
Temp allow action message.
2026-01-11 21:25:59 +08:00
Xuwznln
aacf3497e0 Add no_update_feedback option. 2026-01-09 17:18:39 +08:00
Xuwznln
657f952e7a Create session_id by edge. 2026-01-09 12:01:57 +08:00
Xuwznln
0165590290 bump version to 0.10.15 2026-01-08 15:37:49 +08:00
Xuwznln
daea1ab54d temp cancel update req 2026-01-08 15:26:31 +08:00
Xuwznln
93cb307396 Fix update with different spot and same parent 2026-01-08 03:46:00 +08:00
Xuwznln
1c312772ae Force update resource when adding new resource / transfer to another resource 2026-01-08 03:07:12 +08:00
Xuwznln
bad1db5094 location not passed to ItemizedCarrier when assign child resource 2026-01-08 03:07:11 +08:00
Xuwznln
f26eb69eca Fix size not pass through. 2026-01-08 03:07:11 +08:00
Xuwznln
12c0770c92 Fix build on macos-intel 2026-01-07 21:11:10 +08:00
Xuwznln
3d2d428a96 Update README.md
Modify resource_tracker file module path.

(cherry picked from commit 8066c200b9)
2026-01-07 20:54:43 +08:00
Xuwznln
78bf57f590 Bump version to 0.10.4 2026-01-07 20:41:23 +08:00
Xuwznln
e227cddab3 Update LICENSE 2026-01-07 20:40:02 +08:00
Xuwznln
f2b993643f Fix drag materials. 2026-01-07 19:40:29 +08:00
Xuwznln
2e14bf197c Fix and tested new create_resource. 2026-01-07 19:26:42 +08:00
Xuwznln
66c18c080a Update create_resource to resource tree mode. 2026-01-07 02:03:43 +08:00
Xuwznln
a1c34f138e Close #208. Fix mock devices.
(cherry picked from commit 28f93737ac)
2025-12-28 23:24:44 +08:00
Xianwei Qi
75bb5ec553 test_transfer_liquid_2 2025-12-26 16:42:50 +08:00
Xianwei Qi
bb95c89829 Merge branch 'dev' of https://github.com/dptech-corp/Uni-Lab-OS into dev 2025-12-26 16:25:19 +08:00
Xianwei Qi
394c140830 test_transfer_liquid 2025-12-26 16:24:55 +08:00
Xuwznln
e6d8d41183 bump version to 0.10.3 2025-12-26 03:26:50 +08:00
Xuwznln
847a300af3 update registry 2025-12-26 03:26:46 +08:00
Xuwznln
a201d7c307 update registry 2025-12-26 03:26:45 +08:00
Xuwznln
3433766bc5 do not modify globally 2025-12-26 03:26:44 +08:00
Xuwznln
7e9e93b29c Prcix9320 (#207)
* 0.10.7 Update (#101)

* Cleanup registry to be easy-understanding (#76)

* delete deprecated mock devices

* rename categories

* combine chromatographic devices

* rename rviz simulation nodes

* organic virtual devices

* parse vessel_id

* run registry completion before merge

---------

Co-authored-by: Xuwznln <18435084+Xuwznln@users.noreply.github.com>

* fix: workstation handlers and vessel_id parsing

* fix: working dir error when input config path
feat: report publish topic when error

* modify default discovery_interval to 15s

* feat: add trace log level

* feat: add ChinWe device control class with serial communication and motor control support (#79)

* fix: drop_tips not using auto resource select

* fix: discard_tips error

* fix: discard_tips

* fix: prcxi_res

* add: prcxi res
fix: startup slow

* feat: workstation example

* fix pumps and liquid_handler handle

* feat: improve protocol node runtime logging

* fix all protocol_compilers and remove deprecated devices

* feat: add use_remote_resource parameter

* fix and remove redundant info

* bugfixes on organic protocols

* fix filter protocol

* fix protocol node

* Temporary workaround for incorrect driver implementations

* fix: prcxi import error

* use call_async in all service to avoid deadlock

* fix: figure_resource

* Update recipe.yaml

* add workstation template and battery example

* feat: add sk & ak

* update workstation base

* Create workstation_architecture.md

* refactor: workstation_base now contains only business logic; communication and sub-device management are delegated to ProtocolNode

* refactor: ProtocolNode→WorkstationNode

* Add:msgs.action (#83)

* update: Workstation dev bump version from 0.10.3 to 0.10.4 (#84)

* Add:msgs.action

* update: bump version from 0.10.3 to 0.10.4

* simplify resource system

* uncompleted refactor

* example for use WorkstationBase

* feat: websocket

* feat: websocket test

* feat: workstation example

* feat: action status

* fix: incorrect registration of the station's own methods

* fix: restore protocol node handling methods

* fix: build

* fix: missing job_id key

* ws test version 1

* ws test version 2

* ws protocol

* Add logging for material relationship upload

* Add logging for material relationship upload

* Fix material relationship upload

* Fix workstation tracker instance tracking failure

* Add handle detection and material edge relationship upload

* Fix event loop error

* Fix edge reporting error

* Fix async error

* Update the schema title field

* Support auto-refresh of host node information

* Registry editor

* Fix message errors when status updates are sent in rapid succession

* Add addr parameter

* fix: addr param

* fix: addr param

* Remove labid and mandatory config input

* Add action definitions for LiquidHandlerSetGroup and LiquidHandlerTransferGroup

- Created LiquidHandlerSetGroup.action with fields for group name, wells, and volumes.
- Created LiquidHandlerTransferGroup.action with fields for source and target group names and unit volume.
- Both actions include response fields for return information and success status.

* Add LiquidHandlerSetGroup and LiquidHandlerTransferGroup actions to CMakeLists

* Add set_group and transfer_group methods to PRCXI9300Handler and update liquid_handler.yaml

* Change result_info to a dict type

* Add UAT address substitution

* runze multiple pump support

(cherry picked from commit 49354fcf39)

* remove runze multiple software obtainer

(cherry picked from commit 8bcc92a394)

* support multiple backbone

(cherry picked from commit 4771ff2347)

* Update runze pump format

* Correct runze multiple backbone

* Update runze_multiple_backbone

* Correct runze pump multiple receive method.

* Correct runze pump multiple receive method.

* Support one-to-many and many-to-many transfer_group for PRCXI9320

* Remove MQTT, update launch docs, provide registry example files, bump to 0.10.5

* fix import error

* fix dupe upload registry

* refactor ws client

* add server timeout

* Fix: run-column with correct vessel id (#86)

* fix run_column

* Update run_column_protocol.py

(cherry picked from commit e5aa4d940a)

* resource_update use resource_add

* Add slot recommendation feature

* Redefine the input parameters for slot recommendation

* update registry with nested obj

* fix protocol node log_message, added create_resource return value

* fix protocol node log_message, added create_resource return value

* try fix add protocol

* fix resource_add

* Fix incorrect aspirate registry for the liquid handling station

* Feature/xprbalance-zhida (#80)

* feat(devices): add Zhida GC/MS pretreatment automation workstation

* feat(devices): add mettler_toledo xpr balance

* balance

* Re-complete the zhida registry

* PRCXI9320 json

* PRCXI9320 json

* PRCXI9320 json

* fix resource download

* remove class for resource

* bump version to 0.10.6

* Update all registries

* Fix protocolnode compatibility

* Fix protocolnode compatibility

* Update install md

* Add Defaultlayout

* Update material interface

* fix dict to tree/nested-dict converter

* coin_cell_station draft

* refactor: rename "station_resource" to "deck"

* add standardized BIOYOND resources: bottle_carrier, bottle

* refactor and add BIOYOND resources tests

* add BIOYOND deck assignment and pass all tests

* fix: update resource with correct structure; remove deprecated liquid_handler set_group action

* feat: merge the Neware battery test system driver and config files into workstation_dev_YB2 (#92)

* feat: Neware battery test system driver and registry files

* feat: bring neware driver & battery.json into workstation_dev_YB2

* add bioyond studio draft

* bioyond station with communication init and resource sync

* fix bioyond station and registry

* fix: update resource with correct structure; remove deprecated liquid_handler set_group action

* frontend_docs

* create/update resources with POST/PUT for big amount/ small amount data

* create/update resources with POST/PUT for big amount/ small amount data

* refactor: add itemized_carrier instead of carrier consists of ResourceHolder

* create warehouse by factory func

* update bioyond launch json

* add child_size for itemized_carrier

* fix bioyond resource io

* Workstation templates: Resources and its CRUD, and workstation tasks (#95)

* coin_cell_station draft

* refactor: rename "station_resource" to "deck"

* add standardized BIOYOND resources: bottle_carrier, bottle

* refactor and add BIOYOND resources tests

* add BIOYOND deck assignment and pass all tests

* fix: update resource with correct structure; remove deprecated liquid_handler set_group action

* feat: 将新威电池测试系统驱动与配置文件并入 workstation_dev_YB2 (#92)

* feat: 新威电池测试系统驱动与注册文件

* feat: bring neware driver & battery.json into workstation_dev_YB2

* add bioyond studio draft

* bioyond station with communication init and resource sync

* fix bioyond station and registry

* create/update resources with POST/PUT for big amount/ small amount data

* refactor: add itemized_carrier instead of carrier consists of ResourceHolder

* create warehouse by factory func

* update bioyond launch json

* add child_size for itemized_carrier

* fix bioyond resource io

---------

Co-authored-by: h840473807 <47357934+h840473807@users.noreply.github.com>
Co-authored-by: Xie Qiming <97236197+Andy6M@users.noreply.github.com>

* Update material interface

* Workstation dev yb2 (#100)

* Refactor and extend reaction station action messages

* Refactor dispensing station tasks to enhance parameter clarity and add batch processing capabilities

- Updated `create_90_10_vial_feeding_task` to include detailed parameters for 90%/10% vial feeding, improving clarity and usability.
- Introduced `create_batch_90_10_vial_feeding_task` for batch processing of 90%/10% vial feeding tasks with JSON formatted input.
- Added `create_batch_diamine_solution_task` for batch preparation of diamine solution, also utilizing JSON formatted input.
- Refined `create_diamine_solution_task` to include additional parameters for better task configuration.
- Enhanced schema descriptions and default values for improved user guidance.

* Fix to_plr_resources

* add update remove

* Support automatic generation of the selector registry
Support material transfer

* Fix resource addition

* Fix generation of transfer_resource_to_another

* Update transfer_resource_to_another parameters to support a spot argument

* Add test_resource action

* fix host_node error

* fix host_node test_resource error

* fix host_node test_resource error

* Filter local actions

* Move internal actions for host node compatibility

* Fix bug where synchronization task errors were not displayed

* feat: allow returning materials not belonging to this node; they can be distinguished later via decoration, so no warning is issued

* update todo

* modify bioyond/plr converter, bioyond resource registry, and tests

* pass the tests

* update todo

* add conda-pack-build.yml

* add auto install script for conda-pack-build.yml

(cherry picked from commit 172599adcf)

* update conda-pack-build.yml

* update conda-pack-build.yml

* update conda-pack-build.yml

* update conda-pack-build.yml

* update conda-pack-build.yml

* Add version in __init__.py
Update conda-pack-build.yml
Add create_zip_archive.py

* Update conda-pack-build.yml

* Update conda-pack-build.yml (with mamba)

* Update conda-pack-build.yml

* Fix FileNotFoundError

* Try fix 'charmap' codec can't encode characters in position 16-23: character maps to <undefined>

* Fix unilabos msgs search error

* Fix environment_check.py

* Update recipe.yaml

* Update registry. Update uuid loop figure method. Update install docs.

* Fix nested conda pack

* Fix one-key installation path error

* Bump version to 0.10.7

* Workshop bj (#99)

* Add LaiYu Liquid device integration and tests

Introduce LaiYu Liquid device implementation, including backend, controllers, drivers, configuration, and resource files. Add hardware connection, tip pickup, and simplified test scripts, as well as experiment and registry configuration for LaiYu Liquid. Documentation and .gitignore for the device are also included.

* feat(LaiYu_Liquid): restructure the device module and add hardware documentation

refactor: reorganize the LaiYu_Liquid module directory structure
docs: add SOPA pipette and stepper motor control command documentation
fix: correct the default maximum volume in the device configuration
test: add workbench configuration test cases
chore: remove outdated test scripts and configuration files

* add

* refactor: rename LaiYu_Liquid.py to laiyu_liquid_main.py and update all import references

- Use git mv to rename LaiYu_Liquid.py to laiyu_liquid_main.py
- Update import references in all affected files
- Keep behavior unchanged; only improve naming consistency
- Tests confirm all imports still work

* fix: export LaiYuLiquidBackend from core/__init__.py

- Add LaiYuLiquidBackend to the import list
- Add LaiYuLiquidBackend to the __all__ export list
- Ensure all main classes can be imported correctly

* Fix folder name casing

* Battery assembly workstation secondary development tutorial (with table of contents) uploaded to dev (#94)

* Battery assembly workstation secondary development tutorial

* Update intro.md

* Materials tutorial

* Update the materials tutorial with JSON-format annotations

* Update prcxi driver & fix transfer_liquid mix_times (#90)

* Update prcxi driver & fix transfer_liquid mix_times

* fix: correct mix_times type

* Update liquid_handler registry

* test: prcxi.py

* Update registry from pr

* fix one-key script not existing

* clean files

---------

Co-authored-by: Junhan Chang <changjh@dp.tech>
Co-authored-by: ZiWei <131428629+ZiWei09@users.noreply.github.com>
Co-authored-by: Guangxin Zhang <guangxin.zhang.bio@gmail.com>
Co-authored-by: Xie Qiming <97236197+Andy6M@users.noreply.github.com>
Co-authored-by: h840473807 <47357934+h840473807@users.noreply.github.com>
Co-authored-by: LccLink <1951855008@qq.com>
Co-authored-by: lixinyu1011 <61094742+lixinyu1011@users.noreply.github.com>
Co-authored-by: shiyubo0410 <shiyubo@dp.tech>

* fix startup env check.
add auto install during one-key installation

* Try fix one-key build on linux

* Complete all one key installation

* fix: rename schema field to resource_schema with serialization and validation aliases (#104)

Co-authored-by: ZiWei <131428629+ZiWei09@users.noreply.github.com>

* Fix one-key installation build

Install conda-pack before pack command

Add conda-pack to base when building one-key installer

Fix param error when using mamba run

Try fix one-key build on linux

* Fix conda pack on windows

* add plr_to_bioyond, and refactor bioyond stations

* modify default config

* Fix one-key installation build for windows

* Fix workstation startup
Update registry

* Fix/resource UUID and doc fix (#109)

* Fix ResourceTreeSet load error

* Raise error when using unsupported type to create ResourceTreeSet

* Fix children key error

* Fix children key error

* Fix workstation resource not tracking

* Fix workstation deck & children resource dupe

* Fix workstation deck & children resource dupe

* Fix multiple resource error

* Fix resource tree update

* Fix resource tree update

* Force confirm uuid

* Tip more error log

* Refactor Bioyond workstation and experiment workflow (#105)

Refactored the Bioyond workstation classes to improve parameter handling and workflow management. Updated experiment.py to use BioyondReactionStation with deck and material mappings, and enhanced workflow step parameter mapping and execution logic. Adjusted JSON experiment configs, improved workflow sequence handling, and added UUID assignment to PLR materials. Removed unused station_config and material cache logic, and added detailed docstrings and debug output for workflow methods.

* Fix resource get.
Fix resource parent not found.
Mapping uuid for all resources.

* mount parent uuid

* Add logging configuration based on BasicConfig in main function

* fix workstation node error

* fix workstation node error

* Update boot example

* temp fix for resource get

* temp fix for resource get

* provide error info when cant find plr type

* pack repo info

* fix to plr type error

* fix to plr type error

* Update regular container method

* support no size init

* fix comprehensive_station.json

* fix comprehensive_station.json

* fix type conversion

* fix state loading for regular container

* Update deploy-docs.yml

* Update deploy-docs.yml

---------

Co-authored-by: ZiWei <131428629+ZiWei09@users.noreply.github.com>

* Close #107
Update doc url.

* Fix/update resource (#112)

* cancel upload_registry

* Refactor Bioyond workstation and experiment workflow -fix (#111)

* refactor(bioyond_studio): improve material cache loading and parameter validation

Improve material cache loading to support multiple material types and detailed material handling
Rename workflow parameter validation fields from key/value to Key/DisplayValue
Remove the unused merge_workflow_with_parameters method
Add a get_station_info method to retrieve basic workstation information
Clean up commented-out code in experiment files and update import paths

* fix: check the parent resource when removing resources

In BaseROS2DeviceNode, check whether the parent resource is None before removal to avoid a null-pointer exception
Also update the Bottle and BottleCarrier classes to accept **kwargs
Fix the casing of Liquid_feeding_beaker in the test files

* correct return message

---------

Co-authored-by: ZiWei <131428629+ZiWei09@users.noreply.github.com>

* fix resource_get in action

* fix(reaction_station): clear the workflow sequence and parameters to avoid repeated execution (#113)

Clear the workflow sequence and parameters after creating a task to prevent duplicates from accumulating on the next run

* Update create_resource device_id

* Update ResourceTracker

add more enumeration in POSE

fix converter in resource_tracker

* Update graphio together with workstation design.

fix(reaction_station): add a Value field to step parameters passed to the BY backend

fix(bioyond/warehouses): correct warehouse dimensions and item layout parameters

Adjust the warehouse x-axis and z-axis item counts and the item size parameters to match the 4x1x4 specification

fix warehouse serialize/deserialize

fix bioyond converter

fix itemized_carrier.unassign_child_resource

allow not-loaded MSG in registry

add layout serializer & converter

warehouse uses A1-D4; add warehouse layout

fix(graphio): correct the coordinate calculation error in bioyond-to-plr resource conversion

Fix resource assignment and type mapping issues

Corrects resource assignment in ItemizedCarrier by using the correct spot key from _ordering. Updates graphio to use 'typeName' instead of 'name' for type mapping in resource_bioyond_to_plr. Renames DummyWorkstation to BioyondWorkstation in workstation_http_service for clarity.

* Update workstation & bioyond example

Refine descriptions in Bioyond reaction station YAML

Updated and clarified field and operation descriptions in the reaction_station_bioyond.yaml file for improved accuracy and consistency. Changes include more precise terminology, clearer parameter explanations, and standardized formatting for operation schemas.

refactor(workstation): update reaction station parameter descriptions and add a dispensing station configuration file

Correct the reaction station method parameter descriptions for accuracy and clarity
Add the bioyond_dispensing_station.yaml configuration file

add create_workflow script and test

add invisible_slots to carriers

fix(warehouses): correct dimension parameters for the bioyond_warehouse_1x4x4 warehouse

Adjust num_items_x and num_items_z to match the actual layout and update the item size parameters

save resource get data. allow empty value for layout and cross_section_type

More decks&plates support for bioyond (#115)

refactor(registry): restructure the reaction station device configuration, simplifying and updating operation commands

Remove the old automatic operation commands and add command configurations for specific chemistry operations
Update module paths and configuration structure; refine parameter definitions and descriptions

fix(dispensing_station): correct the material info query method call

Call material_id_query through hardware_interface instead of directly, to comply with the interface design

* PRCXI Update

Modify prcxi wiring

prcxi example diagram

Create example_prcxi.json

* Update resource extra & uuid.

use ordering to convert identifier to idx

convert identifier to site idx

correct extra key

update extra before transfer

fix multiple instance error

add resource_tree_transfer func

fix itemized carrier assign child resource

support internal device material transfer

remove extra key

use same callback group

support material extra

support material extra
support update_resource_site in extra

* Update workstation.

modify workstation_architecture docs

bioyond_HR (#133)

* feat: Enhance Bioyond synchronization and resource management

- Implemented synchronization for all material types (consumables, samples, reagents) from Bioyond, logging detailed information for each type.
- Improved error handling and logging during synchronization processes.
- Added functionality to save Bioyond material IDs in UniLab resources for future updates.
- Enhanced the `sync_to_external` method to handle material movements correctly, including querying and creating materials in Bioyond.
- Updated warehouse configurations to support new storage types and improved layout for better resource management.
- Introduced new resource types such as reactors and tip boxes, with detailed specifications.
- Modified warehouse factory to support column offsets for naming conventions (e.g., A05-D08).
- Improved resource tracking by merging extra attributes instead of overwriting them.
- Added a new method for updating resources in Bioyond, ensuring better synchronization of resource changes.

* feat: add TipBox and Reactor configurations to bottles.yaml

* fix: correct the volume parameter handling in the liquid feeding method

Fix the volume parameter handling in the solid_feeding_vials method and refine the conditions for using the solvents parameter

Update the liquid feeding method to compute volume automatically from solvent information, add a solvents parameter, and update the documentation

Add batch creation methods for vial and solution tasks

Add batch creation of 90%/10% vial feeding tasks and diamine solution preparation tasks, updating related parameters and defaults

* Interfaces for the plate sealer, seal remover, and consumables station

* Add Raman and XRD related code

* Resource update & asyncio fix

correct bioyond config

prcxi example

fix append_resource

fix regularcontainer

fix cancel error

fix resource_get param

fix json dumps

support name change during materials change

enable slave mode

change uuid logger to trace level

correct remove_resource stats

disable slave connect websocket

adjust with_children param

modify devices to use correct executor (sleep, create_task)

support sleep and create_task in node

fix run async execution error

* bump version to 0.10.9

update registry

* PRCXI Reset Error Correction (#166)

* change 9320 desk row number to 4

* Updated 9320 host address

* Updated 9320 host address

* Add **kwargs in classes: PRCXI9300Deck and PRCXI9300Container

* Removed all sample_id in prcxi_9320.json to avoid KeyError

* 9320 machine testing settings

* Typo

* Rewrite setup logic to clear error code

* Initialize the step_mode attribute

* 1114 materials manual definition tutorial by xinyu (#165)

* Yibin Bioyond workstation deck frontend by_Xinyu

* Materials construction tutorial by xinyu

* 1114 materials manual definition tutorial

* 3d sim (#97)

* Modify lh JSON startup

* Modify lh JSON startup

* Modify the backend into a generic sim backend

* Update YAML addresses; adapt the 3D models for the web production environment

* Add laiyu hardware connection

* Modify the pipette state detection method

Modify the pipette state detection method
Add conversion between the three-axis calibration point and the zero point
Add a backend for real three-axis movement

* Modify the laiyu liquid handling station

Simplify the movement methods
Remove the software position limits
Fix the issue where the Z axis also had to be re-homed whenever it was used

* Update lh and the laiyu workshop

1. Other liquid handling stations can now be supported simply by swapping the backend; the main class still uses LiquidHandler and needs no rewrite

2. Change tip detection to rely on the tip itself rather than the class

3. Express homing parameters in millimeters for easier manual adjustment

4. Change the homing scheme: mechanical homing on power-up establishes the mechanical zero, while manual homing sets the working-area zero for calculations; the two do not interfere

* Modify tip actions

* Modify virtual simulation methods

---------

Co-authored-by: zhangshixiang <@zhangshixiang>
Co-authored-by: Junhan Chang <changjh@dp.tech>

* Standardized OPC UA device integration into unilab (#78)

* Initial commit, keeping only the current workspace state

* remove redundant arm_slider meshes

---------

Co-authored-by: Junhan Chang <changjh@dp.tech>

* add new laiyu liquid driver, yaml and json files (#164)

* HR material synchronization and frontend display position fix (#135)

* Update the Bioyond workstation configuration, adding new material type mappings and carrier definitions and improving material query logic

* Add a Bioyond experiment configuration file defining material type mappings and device configuration

* Update the bioyond_warehouse_reagent_stack method, correcting the reagent stack dimensions and layout description

* Update the Bioyond experiment configuration, correcting material type mappings and refining device configuration

* Update the Bioyond resource synchronization logic, streamlining material check-in and strengthening error handling and logging

* Update Bioyond resources, adding dedicated carriers for the dispensing and reaction stations and improving the warehouse factory ordering

* Update Bioyond resources, adding dispensing and reaction station carriers and refining reagent and sample bottle configurations

* Update the Bioyond experiment configuration, correcting reagent bottle carrier IDs to match the devices

* Update Bioyond resources, removing the reaction station single-beaker carrier and adding a single-flask carrier category

* Refactor Bioyond resource synchronization and update bottle carrier definitions

- Removed traceback printing in error handling for Bioyond synchronization.
- Enhanced logging for existing Bioyond material ID usage during synchronization.
- Added new bottle carrier definitions for single flask and updated existing ones.
- Refactored dispensing station and reaction station bottle definitions for clarity and consistency.
- Improved resource mapping and error handling in graphio for Bioyond resource conversion.
- Introduced layout parameter in warehouse factory for better warehouse configuration.

* Update the Bioyond warehouse factory with ordering support and improved coordinate calculation

* Update Bioyond carrier and deck configurations, adjusting sample plate dimensions and warehouse coordinates

* Update Bioyond resource synchronization, improving occupied-slot logging and fixing coordinate conversion logic

* Update the Bioyond reaction and dispensing station configurations, adjusting material type mappings and IDs and removing unnecessary entries

* support name change during materials change

* fix json dumps

* correct tip

* Improve scheduler API paths and update the related method descriptions

* Update the BIOYOND carrier documentation, adjust the API to support carrier types with built-in reagent bottles, and fix child material handling during resource retrieval

* Implement synchronization when resources are deleted and improve the check-out logic

* Fix visibility logic in ItemizedCarrier

* Save the original Bioyond information into unilabos_extra so it can be queried at check-out

* Use resource.capacity to decide between a reagent bottle (carrier) and a multi-bottle carrier, applying the appropriate Bioyond conversion

* Fix bioyond bottle_carriers ordering

* Improve Bioyond material synchronization, strengthening coordinate parsing and position update handling

* disable slave connect websocket

* correct remove_resource stats

* change uuid logger to trace level

* enable slave mode

* refactor(bioyond): unify resource naming and improve material synchronization

- Unify the DispensingStation and ReactionStation resources under the PolymerStation name
- Improve material synchronization to support querying consumable types (typeMode=0)
- Add default material parameter configuration
- Adjust the warehouse coordinate layout
- Clean up deprecated resource definitions

* feat(warehouses): add col_offset and layout parameters to the warehouse functions

* refactor: update material type mapping names in the experiment configuration

Rename the DispensingStation and ReactionStation material type mappings to PolymerStation for naming consistency

* fix: update the carrier name in the experiment configuration from 6VialCarrier to 6StockCarrier

* feat(bioyond): separate material creation from check-in

Split material synchronization into two independent phases: the transfer phase only creates materials, and the add phase performs check-in
Simplify the status check interface to return only the connection state

* fix(reaction_station): correct the liquid feeding beaker unit and enrich the returned result

Change the liquid feeding beaker unit from μL to g to match actual usage
Add merged_workflow and order_params fields to the returned result for more complete workflow information

* feat(dispensing_station): include order_params in the task creation result

Add an order_params field to the create_order return value so callers can obtain the full task parameters

* fix(dispensing_station): use the 90% material directly instead of splitting it into 3 parts

Previously the main weighed solid was split evenly into 3 parts as the 90% material; now main_portion is used directly

* feat(bioyond): output task codes and task IDs, supporting status monitoring after batch task creation

* refactor(registry): simplify task result handling in the device configuration

Merge the separate task code and task ID fields into a unified return_info field
Update the related descriptions to reflect the new data structure

* feat(workstation): add an HTTP reporting service and task completion tracking

- Add API-required fields in graphio.py
- Implement start/stop logic for the workstation HTTP service
- Add a task completion tracking dict and wait methods
- Override the task completion reporting handler to record status
- Support waiting on batches of tasks and fetching their reports

* refactor(dispensing_station): remove wait_for_order_completion_and_get_report

It has been replaced by wait_for_multiple_orders_and_get_reports, simplifying the code structure

* fix: fix the task report API error

* fix(workstation_http_service): fix device_id retrieval in status queries

Safely obtain device_id during status queries to avoid exceptions when the attribute does not exist

* fix(bioyond_studio): improve error handling and logging when material check-in fails

Print more detailed error information when the material check-in API call fails
Also correct the handling of empty responses and failure cases in station.py

* refactor(bioyond): improve bottle rack carrier assignment logic and its comments

Refactor the bottle rack carrier assignment to use nested loops instead of hard-coded index assignments
Add more detailed coordinate mapping notes clarifying the correspondence between PLR and Bioyond coordinates

* fix(bioyond_rpc): fix empty return when material check-in succeeds without a data field

When the API reports success but returns no data field, return a dict containing a success flag instead of an empty dict

---------

Co-authored-by: Xuwznln <18435084+Xuwznln@users.noreply.github.com>
Co-authored-by: Junhan Chang <changjh@dp.tech>

* nmr

* Update devices

* bump version to 0.10.10

* Update repo files.

* Add get_resource_with_dir & get_resource method

* fix camera & workstation & warehouse & reaction station driver

* update docs, test examples
fix liquid_handler init bug

* bump version to 0.10.11

* Add startup_json_path, disable_browser, port config

* Update oss config

* feat(bioyond_studio): add project API support and improve material management

Add generic project API methods (_post_project_api, _delete_project_api) for interacting with the LIMS system
Implement the compute_experiment_design method for experiment design calculations
Add order-related interface methods such as brief_step_parameters
Improve the material transfer logic with asynchronous task handling
Extend the BioyondV1RPC class with batch material operations and order status management

* feat(bioyond): add a measurement vial warehouse and update the warehouse factory function parameters

* Support unilabos_samples key

* add session_id and normal_exit

* Add result schema and add TypedDict conversion.

* Fix port error

* Add backend api and update doc

* Add get_regular_container func

* Add get_regular_container func

* Transfer_liquid (#176)

* change 9320 desk row number to 4

* Updated 9320 host address

* Updated 9320 host address

* Add **kwargs in classes: PRCXI9300Deck and PRCXI9300Container

* Removed all sample_id in prcxi_9320.json to avoid KeyError

* 9320 machine testing settings

* Typo

* Typo in base_device_node.py

* Enhance liquid handling functionality by adding support for multiple transfer modes (one-to-many, one-to-one, many-to-one) and improving parameter validation. Default channel usage is set when not specified. Adjusted mixing logic to ensure it only occurs when valid conditions are met. Updated documentation for clarity.

* Auto dump logs, fix workstation input schema

* Fix startup with remote resource error

Resource dict fully change to "pose" key

Update oss link

Reduce pylabrobot conversion warning & force enable log dump.

Update logo image

* signal when host node is ready

* fix ros2 future

print all logs to file
fix resource dict dump error

* update version to 0.10.12

* Modify the sample_uuid return value

* Modify the pose label setting mechanism

* Add a return value to the aspirate function

* Return the sample_uuid after dispense

* Add a reset method for self.pending_liquids_dict

* Modify the prcxi JSON file to fix the trash error

* Modify the prcxi JSON to prevent PlateT4 hardware errors

* Partially modify the laiyu liquid handling station to eliminate repeated initialization

* Update the visualization for the new material format

* Add a tip-switching method and mock shaking and heating methods

* Add gripper

* Remove redundant laiyu parts

* Gripper can be launched from the cloud

* Delete __init__.py

* Enhance PRCXI9300 classes with new Container and TipRack implementations, improving state management and initialization logic. Update JSON configuration to reflect type changes for containers and plates.

* Modify the uploaded data

---------

Co-authored-by: Junhan Chang <changjh@dp.tech>
Co-authored-by: ZiWei <131428629+ZiWei09@users.noreply.github.com>
Co-authored-by: Guangxin Zhang <guangxin.zhang.bio@gmail.com>
Co-authored-by: Xie Qiming <97236197+Andy6M@users.noreply.github.com>
Co-authored-by: h840473807 <47357934+h840473807@users.noreply.github.com>
Co-authored-by: LccLink <1951855008@qq.com>
Co-authored-by: lixinyu1011 <61094742+lixinyu1011@users.noreply.github.com>
Co-authored-by: shiyubo0410 <shiyubo@dp.tech>
Co-authored-by: hh.(SII) <103566763+Mile-Away@users.noreply.github.com>
Co-authored-by: Xianwei Qi <qxw@stu.pku.edu.cn>
Co-authored-by: WenzheG <wenzheguo32@gmail.com>
Co-authored-by: Harry Liu <113173203+ALITTLELZ@users.noreply.github.com>
Co-authored-by: q434343 <73513873+q434343@users.noreply.github.com>
Co-authored-by: tt <166512503+tt11142023@users.noreply.github.com>
Co-authored-by: xyc <49015816+xiaoyu10031@users.noreply.github.com>
Co-authored-by: zhangshixiang <@zhangshixiang>
Co-authored-by: zhangshixiang <554662886@qq.com>
Co-authored-by: ALITTLELZ <l_LZlz@163.com>
2025-12-26 02:28:56 +08:00
Xuwznln
9e1e6da505 Add topic config 2025-12-26 02:26:17 +08:00
xyc
8a0f000bab add camera driver (#191)
* add camera driver

* add init.py file to cameraSII driver
2025-12-23 18:41:43 +08:00
Xie Qiming
2ffeb49acb Enhanced Neware Battery Test System OSS Upload (#196)
* feat: neware-oss-upload-enhancement

* feat(neware): enhance OSS upload with metadata and workflow handles
2025-12-23 18:41:15 +08:00
Roy
5fec753fb9 Add post process station and related resources (#195)
* Add post process station and related resources

- Created JSON configuration for post_process_station and its child post_process_deck.
- Added YAML definitions for post_process_station, bottle carriers, bottles, and deck resources.
- Implemented Python classes for bottle carriers, bottles, decks, and warehouses to manage resources in the post process.
- Established a factory method for creating warehouses with customizable dimensions and layouts.
- Defined the structure and behavior of the post_process_deck and its associated warehouses.

* feat(post_process): add post_process_station and related warehouse functionality

- Introduced post_process_station.json to define the post-processing station structure.
- Implemented post_process_warehouse.py to create warehouse configurations with customizable layouts.
- Added warehouses.py for specific warehouse configurations (4x3x1).
- Updated post_process_station.yaml to reflect new module paths for OpcUaClient.
- Refactored bottle carriers and bottles YAML files to point to the new module paths.
- Adjusted deck.yaml to align with the new organizational structure for post_process_deck.
2025-12-23 18:40:09 +08:00
shuchang
acbaff7bb7 prcxi resource (#202)
* prcxi resource

* prcxi_resource

* Fix upload error not showing.
Support str type category.

---------

Co-authored-by: Xuwznln <18435084+Xuwznln@users.noreply.github.com>
2025-12-23 15:08:04 +08:00
Xuwznln
706323dc3e Merge remote-tracking branch 'origin/dev' into dev 2025-12-23 14:50:54 +08:00
Xuwznln
b0804d939c Fix upload error not showing.
Support str type category.
2025-12-23 14:50:35 +08:00
Xuwznln
d0ac452405 Update organic syn station.
(cherry picked from commit 13a6795657)
2025-12-15 02:34:51 +08:00
283 changed files with 343091 additions and 44400 deletions

.conda/base/recipe.yaml (new file, 61 lines)

@@ -0,0 +1,61 @@
# unilabos: Production package (depends on unilabos-env + pip unilabos)
# For production deployment
package:
name: unilabos
version: 0.10.16
source:
path: ../../unilabos
target_directory: unilabos
build:
python:
entry_points:
- unilab = unilabos.app.main:main
script:
- set PIP_NO_INDEX=
- if: win
then:
- copy %RECIPE_DIR%\..\..\MANIFEST.in %SRC_DIR%
- copy %RECIPE_DIR%\..\..\setup.cfg %SRC_DIR%
- copy %RECIPE_DIR%\..\..\setup.py %SRC_DIR%
- pip install %SRC_DIR%
- if: unix
then:
- cp $RECIPE_DIR/../../MANIFEST.in $SRC_DIR
- cp $RECIPE_DIR/../../setup.cfg $SRC_DIR
- cp $RECIPE_DIR/../../setup.py $SRC_DIR
- uv pip install $SRC_DIR
requirements:
host:
- python ==3.11.14
- pip
- setuptools
- zstd
- zstandard
run:
- zstd
- zstandard
- networkx
- typing_extensions
- websockets
- opentrons_shared_data
- pint
- fastapi
- jinja2
- requests
- uvicorn
- opcua
- pyserial
- pandas
- pymodbus
- matplotlib
- pylibftdi
- uni-lab::unilabos-env ==0.10.16
about:
repository: https://github.com/deepmodeling/Uni-Lab-OS
license: GPL-3.0-only
description: "UniLabOS - Production package with minimal ROS2 dependencies"


@@ -0,0 +1,39 @@
# unilabos-env: conda environment dependencies (ROS2 + conda packages)
package:
name: unilabos-env
version: 0.10.16
build:
noarch: generic
requirements:
run:
# Python
- zstd
- zstandard
- conda-forge::python ==3.11.14
- conda-forge::opencv
# ROS2 dependencies (from ci-check.yml)
- robostack-staging::ros-humble-ros-core
- robostack-staging::ros-humble-action-msgs
- robostack-staging::ros-humble-std-msgs
- robostack-staging::ros-humble-geometry-msgs
- robostack-staging::ros-humble-control-msgs
- robostack-staging::ros-humble-nav2-msgs
- robostack-staging::ros-humble-cv-bridge
- robostack-staging::ros-humble-vision-opencv
- robostack-staging::ros-humble-tf-transformations
- robostack-staging::ros-humble-moveit-msgs
- robostack-staging::ros-humble-tf2-ros
- robostack-staging::ros-humble-tf2-ros-py
- conda-forge::transforms3d
- conda-forge::uv
# UniLabOS custom messages
- uni-lab::ros-humble-unilabos-msgs
about:
repository: https://github.com/deepmodeling/Uni-Lab-OS
license: GPL-3.0-only
description: "UniLabOS Environment - ROS2 and conda dependencies (for developers: pip install -e .)"

.conda/full/recipe.yaml (new file, 42 lines)

@@ -0,0 +1,42 @@
# unilabos-full: Full package with all features
# Depends on unilabos + complete ROS2 desktop + dev tools
package:
name: unilabos-full
version: 0.10.16
build:
noarch: generic
requirements:
run:
# Base unilabos package (includes unilabos-env)
- uni-lab::unilabos ==0.10.16
# Documentation tools
- sphinx
- sphinx_rtd_theme
# Web UI
- gradio
- flask
# Interactive development
- ipython
- jupyter
- jupyros
- colcon-common-extensions
# ROS2 full desktop (includes rviz2, gazebo, etc.)
- robostack-staging::ros-humble-desktop-full
# Navigation and motion control
- ros-humble-navigation2
- ros-humble-ros2-control
- ros-humble-robot-state-publisher
- ros-humble-joint-state-publisher
# MoveIt motion planning
- ros-humble-moveit
- ros-humble-moveit-servo
# Simulation
- ros-humble-simulation
about:
repository: https://github.com/deepmodeling/Uni-Lab-OS
license: GPL-3.0-only
description: "UniLabOS Full - Complete package with ROS2 Desktop, MoveIt, Navigation2, Gazebo, Jupyter"


@@ -1,92 +0,0 @@
package:
name: unilabos
version: 0.10.12
source:
path: ../unilabos
target_directory: unilabos
build:
python:
entry_points:
- unilab = unilabos.app.main:main
script:
- set PIP_NO_INDEX=
- if: win
then:
- copy %RECIPE_DIR%\..\MANIFEST.in %SRC_DIR%
- copy %RECIPE_DIR%\..\setup.cfg %SRC_DIR%
- copy %RECIPE_DIR%\..\setup.py %SRC_DIR%
- call %PYTHON% -m pip install %SRC_DIR%
- if: unix
then:
- cp $RECIPE_DIR/../MANIFEST.in $SRC_DIR
- cp $RECIPE_DIR/../setup.cfg $SRC_DIR
- cp $RECIPE_DIR/../setup.py $SRC_DIR
- $PYTHON -m pip install $SRC_DIR
requirements:
host:
- python ==3.11.11
- pip
- setuptools
- zstd
- zstandard
run:
- conda-forge::python ==3.11.11
- compilers
- cmake
- zstd
- zstandard
- ninja
- if: unix
then:
- make
- sphinx
- sphinx_rtd_theme
- numpy
- scipy
- pandas
- networkx
- matplotlib
- pint
- pyserial
- pyusb
- pylibftdi
- pymodbus
- python-can
- pyvisa
- opencv
- pydantic
- fastapi
- uvicorn
- gradio
- flask
- websockets
- ipython
- jupyter
- jupyros
- colcon-common-extensions
- robostack-staging::ros-humble-desktop-full
- robostack-staging::ros-humble-control-msgs
- robostack-staging::ros-humble-sensor-msgs
- robostack-staging::ros-humble-trajectory-msgs
- ros-humble-navigation2
- ros-humble-ros2-control
- ros-humble-robot-state-publisher
- ros-humble-joint-state-publisher
- ros-humble-rosbridge-server
- ros-humble-cv-bridge
- ros-humble-tf2
- ros-humble-moveit
- ros-humble-moveit-servo
- ros-humble-simulation
- ros-humble-tf-transformations
- transforms3d
- uni-lab::ros-humble-unilabos-msgs
about:
repository: https://github.com/dptech-corp/Uni-Lab-OS
license: GPL-3.0-only
description: "Uni-Lab-OS"


@@ -1,9 +0,0 @@
@echo off
setlocal enabledelayedexpansion
REM upgrade pip
"%PREFIX%\python.exe" -m pip install --upgrade pip
REM install extra deps
"%PREFIX%\python.exe" -m pip install paho-mqtt opentrons_shared_data
"%PREFIX%\python.exe" -m pip install git+https://github.com/Xuwznln/pylabrobot.git


@@ -1,9 +0,0 @@
#!/usr/bin/env bash
set -euxo pipefail
# make sure pip is available
"$PREFIX/bin/python" -m pip install --upgrade pip
# install extra deps
"$PREFIX/bin/python" -m pip install paho-mqtt opentrons_shared_data
"$PREFIX/bin/python" -m pip install git+https://github.com/Xuwznln/pylabrobot.git

.github/dependabot.yml (new file, 19 lines)

@@ -0,0 +1,19 @@
version: 2
updates:
# GitHub Actions
- package-ecosystem: "github-actions"
directory: "/"
target-branch: "dev"
schedule:
interval: "weekly"
day: "monday"
time: "06:00"
open-pull-requests-limit: 5
reviewers:
- "msgcenterpy-team"
labels:
- "dependencies"
- "github-actions"
commit-message:
prefix: "ci"
include: "scope"

.github/workflows/ci-check.yml (new file, 67 lines)

@@ -0,0 +1,67 @@
name: CI Check
on:
push:
branches: [main, dev]
pull_request:
branches: [main, dev]
jobs:
registry-check:
runs-on: windows-latest
env:
# Fix Unicode encoding issue on Windows runner (cp1252 -> utf-8)
PYTHONIOENCODING: utf-8
PYTHONUTF8: 1
defaults:
run:
shell: cmd
steps:
- uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Setup Miniforge
uses: conda-incubator/setup-miniconda@v3
with:
miniforge-version: latest
use-mamba: true
channels: robostack-staging,conda-forge,uni-lab
channel-priority: flexible
activate-environment: check-env
auto-update-conda: false
show-channel-urls: true
- name: Install ROS dependencies, uv and unilabos-msgs
run: |
echo Installing ROS dependencies...
mamba install -n check-env conda-forge::uv conda-forge::opencv robostack-staging::ros-humble-ros-core robostack-staging::ros-humble-action-msgs robostack-staging::ros-humble-std-msgs robostack-staging::ros-humble-geometry-msgs robostack-staging::ros-humble-control-msgs robostack-staging::ros-humble-nav2-msgs uni-lab::ros-humble-unilabos-msgs robostack-staging::ros-humble-cv-bridge robostack-staging::ros-humble-vision-opencv robostack-staging::ros-humble-tf-transformations robostack-staging::ros-humble-moveit-msgs robostack-staging::ros-humble-tf2-ros robostack-staging::ros-humble-tf2-ros-py conda-forge::transforms3d -c robostack-staging -c conda-forge -c uni-lab -y
- name: Install pip dependencies and unilabos
run: |
call conda activate check-env
echo Installing pip dependencies...
uv pip install -r unilabos/utils/requirements.txt
uv pip install pywinauto git+https://github.com/Xuwznln/pylabrobot.git
uv pip uninstall enum34 || echo enum34 not installed, skipping
uv pip install -e .
- name: Run check mode (complete_registry)
run: |
call conda activate check-env
echo Running check mode...
python -m unilabos --check_mode --skip_env_check
- name: Check for uncommitted changes
shell: bash
run: |
if ! git diff --exit-code; then
echo "::error::检测到文件变化!请先在本地运行 'python -m unilabos --complete_registry' 并提交变更"
echo "变化的文件:"
git diff --name-only
exit 1
fi
echo "检查通过:无文件变化"


@@ -13,6 +13,11 @@ on:
required: false
default: 'win-64'
type: string
build_full:
description: 'Build the full unilabos-full package (defaults to the lightweight unilabos package)'
required: false
default: false
type: boolean
jobs:
build-conda-pack:
@@ -24,7 +29,7 @@ jobs:
platform: linux-64
env_file: unilabos-linux-64.yaml
script_ext: sh
- os: macos-13 # Intel
- os: macos-15 # Intel (via Rosetta)
platform: osx-64
env_file: unilabos-osx-64.yaml
script_ext: sh
@@ -57,7 +62,7 @@ jobs:
echo "should_build=false" >> $GITHUB_OUTPUT
fi
- uses: actions/checkout@v4
- uses: actions/checkout@v6
if: steps.should_build.outputs.should_build == 'true'
with:
ref: ${{ github.event.inputs.branch }}
@@ -69,7 +74,7 @@ jobs:
with:
miniforge-version: latest
use-mamba: true
python-version: '3.11.11'
python-version: '3.11.14'
channels: conda-forge,robostack-staging,uni-lab,defaults
channel-priority: flexible
activate-environment: unilab
@@ -81,7 +86,14 @@ jobs:
run: |
echo Installing unilabos and dependencies to unilab environment...
echo Using mamba for faster and more reliable dependency resolution...
mamba install -n unilab uni-lab::unilabos conda-pack -c uni-lab -c robostack-staging -c conda-forge -y
echo Build full: ${{ github.event.inputs.build_full }}
if "${{ github.event.inputs.build_full }}"=="true" (
echo Installing unilabos-full ^(complete package^)...
mamba install -n unilab uni-lab::unilabos-full conda-pack -c uni-lab -c robostack-staging -c conda-forge -y
) else (
echo Installing unilabos ^(minimal package^)...
mamba install -n unilab uni-lab::unilabos conda-pack -c uni-lab -c robostack-staging -c conda-forge -y
)
- name: Install conda-pack, unilabos and dependencies (Unix)
if: steps.should_build.outputs.should_build == 'true' && matrix.platform != 'win-64'
@@ -89,7 +101,14 @@ jobs:
run: |
echo "Installing unilabos and dependencies to unilab environment..."
echo "Using mamba for faster and more reliable dependency resolution..."
mamba install -n unilab uni-lab::unilabos conda-pack -c uni-lab -c robostack-staging -c conda-forge -y
echo "Build full: ${{ github.event.inputs.build_full }}"
if [[ "${{ github.event.inputs.build_full }}" == "true" ]]; then
echo "Installing unilabos-full (complete package)..."
mamba install -n unilab uni-lab::unilabos-full conda-pack -c uni-lab -c robostack-staging -c conda-forge -y
else
echo "Installing unilabos (minimal package)..."
mamba install -n unilab uni-lab::unilabos conda-pack -c uni-lab -c robostack-staging -c conda-forge -y
fi
- name: Get latest ros-humble-unilabos-msgs version (Windows)
if: steps.should_build.outputs.should_build == 'true' && matrix.platform == 'win-64'
@@ -293,7 +312,7 @@ jobs:
- name: Upload distribution package
if: steps.should_build.outputs.should_build == 'true'
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: unilab-pack-${{ matrix.platform }}-${{ github.event.inputs.branch }}
path: dist-package/
@@ -308,7 +327,12 @@ jobs:
echo ==========================================
echo Platform: ${{ matrix.platform }}
echo Branch: ${{ github.event.inputs.branch }}
echo Python version: 3.11.11
echo Python version: 3.11.14
if "${{ github.event.inputs.build_full }}"=="true" (
echo Package: unilabos-full ^(complete^)
) else (
echo Package: unilabos ^(minimal^)
)
echo.
echo Distribution package contents:
dir dist-package
@@ -328,7 +352,12 @@ jobs:
echo "=========================================="
echo "Platform: ${{ matrix.platform }}"
echo "Branch: ${{ github.event.inputs.branch }}"
echo "Python version: 3.11.11"
echo "Python version: 3.11.14"
if [[ "${{ github.event.inputs.build_full }}" == "true" ]]; then
echo "Package: unilabos-full (complete)"
else
echo "Package: unilabos (minimal)"
fi
echo ""
echo "Distribution package contents:"
ls -lh dist-package/


@@ -1,10 +1,12 @@
name: Deploy Docs
on:
push:
branches: [main]
pull_request:
# Automatically triggered after CI Check succeeds (main branch only)
workflow_run:
workflows: ["CI Check"]
types: [completed]
branches: [main]
# Manual trigger
workflow_dispatch:
inputs:
branch:
@@ -33,12 +35,19 @@ concurrency:
jobs:
# Build documentation
build:
# Run only when:
# 1. triggered by workflow_run and CI Check succeeded
# 2. triggered manually
if: |
github.event_name == 'workflow_dispatch' ||
(github.event_name == 'workflow_run' && github.event.workflow_run.conclusion == 'success')
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
ref: ${{ github.event.inputs.branch || github.ref }}
# For workflow_run, use the branch of the triggering workflow; for manual dispatch, use the input branch
ref: ${{ github.event.workflow_run.head_branch || github.event.inputs.branch || github.ref }}
fetch-depth: 0
- name: Setup Miniforge (with mamba)
@@ -46,7 +55,7 @@ jobs:
with:
miniforge-version: latest
use-mamba: true
python-version: '3.11.11'
python-version: '3.11.14'
channels: conda-forge,robostack-staging,uni-lab,defaults
channel-priority: flexible
activate-environment: unilab
@@ -75,8 +84,10 @@ jobs:
- name: Setup Pages
id: pages
uses: actions/configure-pages@v4
if: github.ref == 'refs/heads/main' || (github.event_name == 'workflow_dispatch' && github.event.inputs.deploy_to_pages == 'true')
uses: actions/configure-pages@v5
if: |
github.event.workflow_run.head_branch == 'main' ||
(github.event_name == 'workflow_dispatch' && github.event.inputs.deploy_to_pages == 'true')
- name: Build Sphinx documentation
run: |
@@ -94,14 +105,18 @@ jobs:
test -f docs/_build/html/index.html && echo "✓ index.html exists" || echo "✗ index.html missing"
- name: Upload build artifacts
uses: actions/upload-pages-artifact@v3
if: github.ref == 'refs/heads/main' || (github.event_name == 'workflow_dispatch' && github.event.inputs.deploy_to_pages == 'true')
uses: actions/upload-pages-artifact@v4
if: |
github.event.workflow_run.head_branch == 'main' ||
(github.event_name == 'workflow_dispatch' && github.event.inputs.deploy_to_pages == 'true')
with:
path: docs/_build/html
# Deploy to GitHub Pages
deploy:
if: github.ref == 'refs/heads/main' || (github.event_name == 'workflow_dispatch' && github.event.inputs.deploy_to_pages == 'true')
if: |
github.event.workflow_run.head_branch == 'main' ||
(github.event_name == 'workflow_dispatch' && github.event.inputs.deploy_to_pages == 'true')
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}


@@ -1,11 +1,16 @@
name: Multi-Platform Conda Build
on:
# Triggered after the CI Check workflow completes (main/dev branches only)
workflow_run:
workflows: ["CI Check"]
types:
- completed
branches: [main, dev]
# Support tag pushes (independent of CI Check)
push:
branches: [main, dev]
tags: ['v*']
pull_request:
branches: [main, dev]
# Manual trigger
workflow_dispatch:
inputs:
platforms:
@@ -17,9 +22,37 @@ on:
required: false
default: false
type: boolean
skip_ci_check:
description: 'Skip waiting for CI Check (optional for manual dispatch)'
required: false
default: false
type: boolean
jobs:
# Job that waits for CI Check to finish (workflow_run triggers only)
wait-for-ci:
runs-on: ubuntu-latest
if: github.event_name == 'workflow_run'
outputs:
should_continue: ${{ steps.check.outputs.should_continue }}
steps:
- name: Check CI status
id: check
run: |
if [[ "${{ github.event.workflow_run.conclusion }}" == "success" ]]; then
echo "should_continue=true" >> $GITHUB_OUTPUT
echo "CI Check passed, proceeding with build"
else
echo "should_continue=false" >> $GITHUB_OUTPUT
echo "CI Check did not succeed (status: ${{ github.event.workflow_run.conclusion }}), skipping build"
fi
build:
needs: [wait-for-ci]
# Run condition: triggered by workflow_run with CI success, or by any other trigger type
if: |
always() &&
(needs.wait-for-ci.result == 'skipped' || needs.wait-for-ci.outputs.should_continue == 'true')
strategy:
fail-fast: false
matrix:
@@ -27,7 +60,7 @@ jobs:
- os: ubuntu-latest
platform: linux-64
env_file: unilabos-linux-64.yaml
- os: macos-13 # Intel
- os: macos-15 # Intel (via Rosetta)
platform: osx-64
env_file: unilabos-osx-64.yaml
- os: macos-latest # ARM64
@@ -44,8 +77,10 @@ jobs:
shell: bash -l {0}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
with:
# If triggered by workflow_run, use the commit that triggered CI Check
ref: ${{ github.event.workflow_run.head_sha || github.ref }}
fetch-depth: 0
- name: Check if platform should be built
@@ -69,7 +104,6 @@ jobs:
channels: conda-forge,robostack-staging,defaults
channel-priority: strict
activate-environment: build-env
auto-activate-base: false
auto-update-conda: false
show-channel-urls: true
@@ -115,7 +149,7 @@ jobs:
- name: Upload conda package artifacts
if: steps.should_build.outputs.should_build == 'true'
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: conda-package-${{ matrix.platform }}
path: conda-packages-temp


@@ -1,32 +1,69 @@
name: UniLabOS Conda Build
on:
# Automatically triggered after CI Check succeeds
workflow_run:
workflows: ["CI Check"]
types: [completed]
branches: [main, dev]
# Triggered directly on tag pushes (releases)
push:
branches: [main, dev]
tags: ['v*']
pull_request:
branches: [main, dev]
# Manual trigger
workflow_dispatch:
inputs:
platforms:
description: 'Select build platforms (comma-separated): linux-64, osx-64, osx-arm64, win-64'
required: false
default: 'linux-64'
build_full:
description: 'Build the full unilabos-full package (by default only the base unilabos package is built)'
required: false
default: false
type: boolean
upload_to_anaconda:
description: 'Upload to Anaconda.org'
required: false
default: false
type: boolean
skip_ci_check:
description: 'Skip waiting for CI Check (optional for manual dispatch)'
required: false
default: false
type: boolean
jobs:
# Job that waits for CI Check to finish (workflow_run triggers only)
wait-for-ci:
runs-on: ubuntu-latest
if: github.event_name == 'workflow_run'
outputs:
should_continue: ${{ steps.check.outputs.should_continue }}
steps:
- name: Check CI status
id: check
run: |
if [[ "${{ github.event.workflow_run.conclusion }}" == "success" ]]; then
echo "should_continue=true" >> $GITHUB_OUTPUT
echo "CI Check passed, proceeding with build"
else
echo "should_continue=false" >> $GITHUB_OUTPUT
echo "CI Check did not succeed (status: ${{ github.event.workflow_run.conclusion }}), skipping build"
fi
build:
needs: [wait-for-ci]
# Run condition: triggered by workflow_run with CI success, or by any other trigger type
if: |
always() &&
(needs.wait-for-ci.result == 'skipped' || needs.wait-for-ci.outputs.should_continue == 'true')
strategy:
fail-fast: false
matrix:
include:
- os: ubuntu-latest
platform: linux-64
- os: macos-13 # Intel
- os: macos-15 # Intel (via Rosetta)
platform: osx-64
- os: macos-latest # ARM64
platform: osx-arm64
@@ -40,8 +77,10 @@ jobs:
shell: bash -l {0}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
with:
# If triggered by workflow_run, use the commit that triggered CI Check
ref: ${{ github.event.workflow_run.head_sha || github.ref }}
fetch-depth: 0
- name: Check if platform should be built
@@ -65,7 +104,6 @@ jobs:
channels: conda-forge,robostack-staging,uni-lab,defaults
channel-priority: strict
activate-environment: build-env
auto-activate-base: false
auto-update-conda: false
show-channel-urls: true
@@ -81,12 +119,33 @@ jobs:
conda list | grep -E "(rattler-build|anaconda-client)"
echo "Platform: ${{ matrix.platform }}"
echo "OS: ${{ matrix.os }}"
echo "Building UniLabOS package"
echo "Build full package: ${{ github.event.inputs.build_full || 'false' }}"
echo "Building packages:"
echo " - unilabos-env (environment dependencies)"
echo " - unilabos (with pip package)"
if [[ "${{ github.event.inputs.build_full }}" == "true" ]]; then
echo " - unilabos-full (complete package)"
fi
- name: Build conda package
- name: Build unilabos-env (conda environment only, noarch)
if: steps.should_build.outputs.should_build == 'true'
run: |
rattler-build build -r .conda/recipe.yaml -c uni-lab -c robostack-staging -c conda-forge
echo "Building unilabos-env (conda environment dependencies)..."
rattler-build build -r .conda/environment/recipe.yaml -c uni-lab -c robostack-staging -c conda-forge
- name: Build unilabos (with pip package)
if: steps.should_build.outputs.should_build == 'true'
run: |
echo "Building unilabos package..."
rattler-build build -r .conda/base/recipe.yaml -c uni-lab -c robostack-staging -c conda-forge --channel ./output
- name: Build unilabos-full - Only when explicitly requested
if: |
steps.should_build.outputs.should_build == 'true' &&
github.event.inputs.build_full == 'true'
run: |
echo "Building unilabos-full package on ${{ matrix.platform }}..."
rattler-build build -r .conda/full/recipe.yaml -c uni-lab -c robostack-staging -c conda-forge --channel ./output
- name: List built packages
if: steps.should_build.outputs.should_build == 'true'
@@ -108,7 +167,7 @@ jobs:
- name: Upload conda package artifacts
if: steps.should_build.outputs.should_build == 'true'
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: conda-package-unilabos-${{ matrix.platform }}
path: conda-packages-temp

2
.gitignore vendored
View File

@@ -1,8 +1,10 @@
cursor_docs/
configs/
temp/
output/
unilabos_data/
pyrightconfig.json
.cursorignore
## Python
# Byte-compiled / optimized / DLL files

View File

@@ -1,4 +1,5 @@
recursive-include unilabos/test *
recursive-include unilabos/utils *
recursive-include unilabos/registry *.yaml
recursive-include unilabos/app/web/static *
recursive-include unilabos/app/web/templates *

17
NOTICE Normal file
View File

@@ -0,0 +1,17 @@
# Uni-Lab-OS Licensing Notice
This project uses a dual licensing structure:
## 1. Main Framework - GPL-3.0
- unilabos/ (except unilabos/devices/)
- docs/
- tests/
See [LICENSE](LICENSE) for details.
## 2. Device Drivers - DP Technology Proprietary License
- unilabos/devices/
See [unilabos/devices/LICENSE](unilabos/devices/LICENSE) for details.

View File

@@ -8,17 +8,13 @@
**English** | [中文](README_zh.md)
[![GitHub Stars](https://img.shields.io/github/stars/dptech-corp/Uni-Lab-OS.svg)](https://github.com/dptech-corp/Uni-Lab-OS/stargazers)
[![GitHub Forks](https://img.shields.io/github/forks/dptech-corp/Uni-Lab-OS.svg)](https://github.com/dptech-corp/Uni-Lab-OS/network/members)
[![GitHub Issues](https://img.shields.io/github/issues/dptech-corp/Uni-Lab-OS.svg)](https://github.com/dptech-corp/Uni-Lab-OS/issues)
[![GitHub License](https://img.shields.io/github/license/dptech-corp/Uni-Lab-OS.svg)](https://github.com/dptech-corp/Uni-Lab-OS/blob/main/LICENSE)
[![GitHub Stars](https://img.shields.io/github/stars/dptech-corp/Uni-Lab-OS.svg)](https://github.com/deepmodeling/Uni-Lab-OS/stargazers)
[![GitHub Forks](https://img.shields.io/github/forks/dptech-corp/Uni-Lab-OS.svg)](https://github.com/deepmodeling/Uni-Lab-OS/network/members)
[![GitHub Issues](https://img.shields.io/github/issues/dptech-corp/Uni-Lab-OS.svg)](https://github.com/deepmodeling/Uni-Lab-OS/issues)
[![GitHub License](https://img.shields.io/github/license/dptech-corp/Uni-Lab-OS.svg)](https://github.com/deepmodeling/Uni-Lab-OS/blob/main/LICENSE)
Uni-Lab-OS is a platform for laboratory automation, designed to connect and control various experimental equipment, enabling automation and standardization of experimental workflows.
## 🏆 Competition
Join the [Intelligent Organic Chemistry Synthesis Competition](https://bohrium.dp.tech/competitions/1451645258) to explore automated synthesis with Uni-Lab-OS!
## Key Features
- Multi-device integration management
@@ -31,41 +27,89 @@ Join the [Intelligent Organic Chemistry Synthesis Competition](https://bohrium.d
Detailed documentation can be found at:
- [Online Documentation](https://xuwznln.github.io/Uni-Lab-OS-Doc/)
- [Online Documentation](https://deepmodeling.github.io/Uni-Lab-OS/)
## Quick Start
Uni-Lab-OS recommends using `mamba` for environment management. Choose the appropriate environment file for your operating system:
### 1. Setup Conda Environment
Uni-Lab-OS recommends using `mamba` for environment management. Choose the package that fits your needs:
| Package | Use Case | Contents |
|---------|----------|----------|
| `unilabos` | **Recommended for most users** | Complete package, ready to use |
| `unilabos-env` | Developers (editable install) | Environment only, install unilabos via pip |
| `unilabos-full` | Simulation/Visualization | unilabos + ROS2 Desktop + Gazebo + MoveIt |
```bash
# Create new environment
mamba create -n unilab python=3.11.11
mamba create -n unilab python=3.11.14
mamba activate unilab
mamba install -n unilab uni-lab::unilabos -c robostack-staging -c conda-forge
# Option A: Standard installation (recommended for most users)
mamba install uni-lab::unilabos -c robostack-staging -c conda-forge
# Option B: For developers (editable mode development)
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
# Then install unilabos and dependencies:
git clone https://github.com/deepmodeling/Uni-Lab-OS.git && cd Uni-Lab-OS
pip install -e .
uv pip install -r unilabos/utils/requirements.txt
# Option C: Full installation (simulation/visualization)
mamba install uni-lab::unilabos-full -c robostack-staging -c conda-forge
```
## Install Dev Uni-Lab-OS
**When to use which?**
- **unilabos**: Standard installation for production deployment and general usage (recommended)
- **unilabos-env**: For developers who need an editable install (`pip install -e .`) to modify the source code
- **unilabos-full**: For simulation (Gazebo), visualization (rviz2), and Jupyter notebooks
### 2. Clone Repository (Optional, for developers)
```bash
# Clone the repository
git clone https://github.com/dptech-corp/Uni-Lab-OS.git
# Clone the repository (only needed for development or examples)
git clone https://github.com/deepmodeling/Uni-Lab-OS.git
cd Uni-Lab-OS
# Install Uni-Lab-OS
pip install .
```
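Whichever option you chose above, a quick sanity check that the package resolves from your environment (a minimal sketch; it mirrors the verification commands used elsewhere in this changeset):
```python
import pathlib

import unilabos

print(unilabos.__version__)                        # 0.10.16 for the release in this changeset
print(pathlib.Path(unilabos.__file__).resolve())   # an editable install resolves into your Uni-Lab-OS clone
```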
3. Start Uni-Lab System:
3. Start Uni-Lab System
Please refer to [Documentation - Boot Examples](https://xuwznln.github.io/Uni-Lab-OS-Doc/boot_examples/index.html)
Please refer to [Documentation - Boot Examples](https://deepmodeling.github.io/Uni-Lab-OS/boot_examples/index.html)
4. Best Practice
See [Best Practice Guide](https://deepmodeling.github.io/Uni-Lab-OS/user_guide/best_practice.html)
## Message Format
Uni-Lab-OS uses pre-built `unilabos_msgs` for system communication. You can find the built versions on the [GitHub Releases](https://github.com/dptech-corp/Uni-Lab-OS/releases) page.
Uni-Lab-OS uses pre-built `unilabos_msgs` for system communication. You can find the built versions on the [GitHub Releases](https://github.com/deepmodeling/Uni-Lab-OS/releases) page.
## Citation
If you use [Uni-Lab-OS](https://arxiv.org/abs/2512.21766) in academic research, please cite:
```bibtex
@article{gao2025unilabos,
title = {UniLabOS: An AI-Native Operating System for Autonomous Laboratories},
doi = {10.48550/arXiv.2512.21766},
publisher = {arXiv},
author = {Gao, Jing and Chang, Junhan and Que, Haohui and Xiong, Yanfei and
Zhang, Shixiang and Qi, Xianwei and Liu, Zhen and Wang, Jun-Jie and
Ding, Qianjun and Li, Xinyu and Pan, Ziwei and Xie, Qiming and
Yan, Zhuang and Yan, Junchi and Zhang, Linfeng},
year = {2025}
}
```
## License
This project is licensed under GPL-3.0 - see the [LICENSE](LICENSE) file for details.
This project uses a dual licensing structure:
- **Main Framework**: GPL-3.0 - see [LICENSE](LICENSE)
- **Device Drivers** (`unilabos/devices/`): DP Technology Proprietary License
See [NOTICE](NOTICE) for complete licensing details.
## Project Statistics
@@ -77,4 +121,4 @@ This project is licensed under GPL-3.0 - see the [LICENSE](LICENSE) file for det
## Contact Us
- GitHub Issues: [https://github.com/dptech-corp/Uni-Lab-OS/issues](https://github.com/dptech-corp/Uni-Lab-OS/issues)
- GitHub Issues: [https://github.com/deepmodeling/Uni-Lab-OS/issues](https://github.com/deepmodeling/Uni-Lab-OS/issues)

View File

@@ -8,17 +8,13 @@
[English](README.md) | **中文**
[![GitHub Stars](https://img.shields.io/github/stars/dptech-corp/Uni-Lab-OS.svg)](https://github.com/dptech-corp/Uni-Lab-OS/stargazers)
[![GitHub Forks](https://img.shields.io/github/forks/dptech-corp/Uni-Lab-OS.svg)](https://github.com/dptech-corp/Uni-Lab-OS/network/members)
[![GitHub Issues](https://img.shields.io/github/issues/dptech-corp/Uni-Lab-OS.svg)](https://github.com/dptech-corp/Uni-Lab-OS/issues)
[![GitHub License](https://img.shields.io/github/license/dptech-corp/Uni-Lab-OS.svg)](https://github.com/dptech-corp/Uni-Lab-OS/blob/main/LICENSE)
[![GitHub Stars](https://img.shields.io/github/stars/dptech-corp/Uni-Lab-OS.svg)](https://github.com/deepmodeling/Uni-Lab-OS/stargazers)
[![GitHub Forks](https://img.shields.io/github/forks/dptech-corp/Uni-Lab-OS.svg)](https://github.com/deepmodeling/Uni-Lab-OS/network/members)
[![GitHub Issues](https://img.shields.io/github/issues/dptech-corp/Uni-Lab-OS.svg)](https://github.com/deepmodeling/Uni-Lab-OS/issues)
[![GitHub License](https://img.shields.io/github/license/dptech-corp/Uni-Lab-OS.svg)](https://github.com/deepmodeling/Uni-Lab-OS/blob/main/LICENSE)
Uni-Lab-OS 是一个用于实验室自动化的综合平台,旨在连接和控制各种实验设备,实现实验流程的自动化和标准化。
## 🏆 比赛
欢迎参加[有机化学合成智能实验大赛](https://bohrium.dp.tech/competitions/1451645258),使用 Uni-Lab-OS 探索自动化合成!
## 核心特点
- 多设备集成管理
@@ -31,43 +27,89 @@ Uni-Lab-OS 是一个用于实验室自动化的综合平台,旨在连接和控
详细文档可在以下位置找到:
- [在线文档](https://xuwznln.github.io/Uni-Lab-OS-Doc/)
- [在线文档](https://deepmodeling.github.io/Uni-Lab-OS/)
## 快速开始
1. 配置 Conda 环境
### 1. 配置 Conda 环境
Uni-Lab-OS 建议使用 `mamba` 管理环境。根据您的操作系统选择适当的环境文件:
Uni-Lab-OS 建议使用 `mamba` 管理环境。根据您的需求选择合适的安装包:
| 安装包 | 适用场景 | 包含内容 |
|--------|----------|----------|
| `unilabos` | **推荐大多数用户** | 完整安装包,开箱即用 |
| `unilabos-env` | 开发者(可编辑安装) | 仅环境依赖,通过 pip 安装 unilabos |
| `unilabos-full` | 仿真/可视化 | unilabos + ROS2 桌面版 + Gazebo + MoveIt |
```bash
# 创建新环境
mamba create -n unilab python=3.11.11
mamba create -n unilab python=3.11.14
mamba activate unilab
mamba install -n unilab uni-lab::unilabos -c robostack-staging -c conda-forge
# 方案 A:标准安装(推荐大多数用户)
mamba install uni-lab::unilabos -c robostack-staging -c conda-forge
# 方案 B:开发者环境(可编辑模式开发)
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
# 然后安装 unilabos 和依赖:
git clone https://github.com/deepmodeling/Uni-Lab-OS.git && cd Uni-Lab-OS
pip install -e .
uv pip install -r unilabos/utils/requirements.txt
# 方案 C:完整安装(仿真/可视化)
mamba install uni-lab::unilabos-full -c robostack-staging -c conda-forge
```
2. 安装开发版 Uni-Lab-OS:
**如何选择?**
- **unilabos**:标准安装,适用于生产部署和日常使用(推荐)
- **unilabos-env**:开发者使用,支持 `pip install -e .` 可编辑模式,可修改源代码
- **unilabos-full**:需要仿真(Gazebo)、可视化(rviz2)或 Jupyter Notebook
### 2. 克隆仓库(可选,供开发者使用)
```bash
# 克隆仓库
git clone https://github.com/dptech-corp/Uni-Lab-OS.git
# 克隆仓库(仅开发或查看示例时需要)
git clone https://github.com/deepmodeling/Uni-Lab-OS.git
cd Uni-Lab-OS
# 安装 Uni-Lab-OS
pip install .
```
3. 启动 Uni-Lab 系统:
3. 启动 Uni-Lab 系统
请见[文档-启动样例](https://xuwznln.github.io/Uni-Lab-OS-Doc/boot_examples/index.html)
请见[文档-启动样例](https://deepmodeling.github.io/Uni-Lab-OS/boot_examples/index.html)
4. 最佳实践
请见[最佳实践指南](https://deepmodeling.github.io/Uni-Lab-OS/user_guide/best_practice.html)
## 消息格式
Uni-Lab-OS 使用预构建的 `unilabos_msgs` 进行系统通信。您可以在 [GitHub Releases](https://github.com/dptech-corp/Uni-Lab-OS/releases) 页面找到已构建的版本。
Uni-Lab-OS 使用预构建的 `unilabos_msgs` 进行系统通信。您可以在 [GitHub Releases](https://github.com/deepmodeling/Uni-Lab-OS/releases) 页面找到已构建的版本。
## 引用
如果您在学术研究中使用 [Uni-Lab-OS](https://arxiv.org/abs/2512.21766),请引用:
```bibtex
@article{gao2025unilabos,
title = {UniLabOS: An AI-Native Operating System for Autonomous Laboratories},
doi = {10.48550/arXiv.2512.21766},
publisher = {arXiv},
author = {Gao, Jing and Chang, Junhan and Que, Haohui and Xiong, Yanfei and
Zhang, Shixiang and Qi, Xianwei and Liu, Zhen and Wang, Jun-Jie and
Ding, Qianjun and Li, Xinyu and Pan, Ziwei and Xie, Qiming and
Yan, Zhuang and Yan, Junchi and Zhang, Linfeng},
year = {2025}
}
```
## 许可证
项目采用 GPL-3.0 许可 - 详情请参阅 [LICENSE](LICENSE) 文件。
项目采用双许可证结构:
- **主框架**GPL-3.0 - 详见 [LICENSE](LICENSE)
- **设备驱动** (`unilabos/devices/`):深势科技专有许可证
完整许可证说明请参阅 [NOTICE](NOTICE)。
## 项目统计
@@ -79,4 +121,4 @@ Uni-Lab-OS 使用预构建的 `unilabos_msgs` 进行系统通信。您可以在
## 联系我们
- GitHub Issues: [https://github.com/dptech-corp/Uni-Lab-OS/issues](https://github.com/dptech-corp/Uni-Lab-OS/issues)
- GitHub Issues: [https://github.com/deepmodeling/Uni-Lab-OS/issues](https://github.com/deepmodeling/Uni-Lab-OS/issues)

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -24,7 +24,7 @@ extensions = [
"sphinx.ext.autodoc",
"sphinx.ext.napoleon", # 如果您使用 Google 或 NumPy 风格的 docstrings
"sphinx_rtd_theme",
"sphinxcontrib.mermaid"
"sphinxcontrib.mermaid",
]
source_suffix = {
@@ -58,7 +58,7 @@ html_theme = "sphinx_rtd_theme"
# sphinx-book-theme 主题选项
html_theme_options = {
"repository_url": "https://github.com/用户名/Uni-Lab",
"repository_url": "https://github.com/deepmodeling/Uni-Lab-OS",
"use_repository_button": True,
"use_issues_button": True,
"use_edit_page_button": True,

File diff suppressed because it is too large

View File

@@ -12,3 +12,7 @@ sphinx-copybutton>=0.5.0
# 用于自动摘要生成
sphinx-autobuild>=2024.2.4
# 用于PDF导出 (rinohtype方案,纯Python,无需LaTeX)
rinohtype>=0.5.4
sphinx-simplepdf>=1.6.0

View File

@@ -31,6 +31,14 @@
详细的安装步骤请参考 [安装指南](installation.md)。
**选择合适的安装包:**
| 安装包 | 适用场景 | 包含组件 |
|--------|----------|----------|
| `unilabos` | **推荐大多数用户**,生产部署 | 完整安装包,开箱即用 |
| `unilabos-env` | 开发者(可编辑安装) | 仅环境依赖,通过 pip 安装 unilabos |
| `unilabos-full` | 仿真/可视化 | unilabos + 完整 ROS2 桌面版 + Gazebo + MoveIt |
**关键步骤:**
```bash
@@ -38,15 +46,30 @@
# 下载 Miniforge: https://github.com/conda-forge/miniforge/releases
# 2. 创建 Conda 环境
mamba create -n unilab python=3.11.11
mamba create -n unilab python=3.11.14
# 3. 激活环境
mamba activate unilab
# 4. 安装 Uni-Lab-OS
# 4. 安装 Uni-Lab-OS(选择其一)
# 方案 A:标准安装(推荐大多数用户)
mamba install uni-lab::unilabos -c robostack-staging -c conda-forge
# 方案 B:开发者环境(可编辑模式开发)
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
pip install -e /path/to/Uni-Lab-OS # 可编辑安装
uv pip install -r unilabos/utils/requirements.txt # 安装 pip 依赖
# 方案 C:完整版(仿真/可视化)
mamba install uni-lab::unilabos-full -c robostack-staging -c conda-forge
```
**选择建议:**
- **日常使用/生产部署**:使用 `unilabos`(推荐),完整功能,开箱即用
- **开发者**:使用 `unilabos-env` + `pip install -e .` + `uv pip install -r unilabos/utils/requirements.txt`,代码修改立即生效
- **仿真/可视化**:使用 `unilabos-full`,含 Gazebo、rviz2、MoveIt
#### 1.2 验证安装
```bash
@@ -768,7 +791,43 @@ Waiting for host service...
详细的设备驱动编写指南请参考 [添加设备驱动](../developer_guide/add_device.md)。
#### 9.1 为什么需要自定义设备?
#### 9.1 开发环境准备
**推荐使用 `unilabos-env` + `pip install -e .` + `uv pip install`** 进行设备开发:
```bash
# 1. 创建环境并安装 unilabos-env(ROS2 + conda 依赖 + uv)
mamba create -n unilab python=3.11.14
conda activate unilab
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
# 2. 克隆代码
git clone https://github.com/deepmodeling/Uni-Lab-OS.git
cd Uni-Lab-OS
# 3. 以可编辑模式安装(推荐使用脚本,自动检测中文环境)
python scripts/dev_install.py
# 或手动安装:
pip install -e .
uv pip install -r unilabos/utils/requirements.txt
```
**为什么使用这种方式?**
- `unilabos-env` 提供 ROS2 核心组件和 uv(通过 conda 安装,避免编译)
- `unilabos/utils/requirements.txt` 包含所有运行时需要的 pip 依赖
- `dev_install.py` 自动检测中文环境,中文系统自动使用清华镜像
- 使用 `uv` 替代 `pip`,安装速度更快
- 可编辑模式:代码修改**立即生效**,无需重新安装
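下面是镜像选择逻辑的最小示意(仅作说明,完整实现见本次 diff 后文新增的 `scripts/dev_install.py`):
```python
import locale


def use_tsinghua_mirror() -> bool:
    """示意:与 dev_install.py 相同的中文环境检测逻辑(仅用于说明)。"""
    lang = locale.getdefaultlocale()[0]  # 例如 "zh_CN"
    return bool(lang) and ("zh" in lang.lower() or "chinese" in lang.lower())


# 检测到中文环境时,安装命令会追加:
#   -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
```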
**如果安装失败或速度太慢**,可以手动执行(使用清华镜像):
```bash
pip install -e . -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
uv pip install -r unilabos/utils/requirements.txt -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
```
#### 9.2 为什么需要自定义设备?
Uni-Lab-OS 内置了常见设备,但您的实验室可能有特殊设备需要集成:
@@ -777,7 +836,7 @@ Uni-Lab-OS 内置了常见设备,但您的实验室可能有特殊设备需要
- 特殊的实验流程
- 第三方设备集成
#### 9.2 创建 Python 包
#### 9.3 创建 Python 包
为了方便开发和管理,建议为您的实验室创建独立的 Python 包。
@@ -814,7 +873,7 @@ touch my_lab_devices/my_lab_devices/__init__.py
touch my_lab_devices/my_lab_devices/devices/__init__.py
```
#### 9.3 创建 setup.py
#### 9.4 创建 setup.py
```python
# my_lab_devices/setup.py
@@ -845,7 +904,7 @@ setup(
)
```
#### 9.4 开发安装
#### 9.5 开发安装
使用 `-e` 参数进行可编辑安装,这样代码修改后立即生效:
@@ -860,7 +919,7 @@ pip install -e . -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
- 方便调试和测试
- 支持版本控制(git)
#### 9.5 编写设备驱动
#### 9.6 编写设备驱动
创建设备驱动文件:
@@ -1001,7 +1060,7 @@ class MyPump:
- **返回 Dict**:所有动作方法返回字典类型
- **文档字符串**:详细说明参数和功能
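完整的 `MyPump` 示例在本次 diff 中被折叠,这里给出一个符合上述约定的最小示意(除 `MyPump` 外的名称均为举例,非文档既定 API):
```python
from typing import Dict


class MyPump:
    """最小设备驱动示意:动作方法返回 Dict,并带有说明参数的文档字符串。"""

    def __init__(self, port: str = "COM1"):
        self.port = port
        self.running = False

    def start(self, flow_rate: float) -> Dict:
        """启动泵。

        Args:
            flow_rate: 目标流速(mL/min)。
        """
        self.running = True
        return {"success": True, "flow_rate": flow_rate}

    def stop(self) -> Dict:
        """停止泵并返回最终状态。"""
        self.running = False
        return {"success": True}
```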
#### 9.6 测试设备驱动
#### 9.7 测试设备驱动
创建简单的测试脚本:
@@ -1807,7 +1866,7 @@ unilab --ak your_ak --sk your_sk -g graph.json \
#### 14.5 社区支持
- **GitHub Issues**[https://github.com/dptech-corp/Uni-Lab-OS/issues](https://github.com/dptech-corp/Uni-Lab-OS/issues)
- **GitHub Issues**[https://github.com/deepmodeling/Uni-Lab-OS/issues](https://github.com/deepmodeling/Uni-Lab-OS/issues)
- **官方网站**[https://uni-lab.bohrium.com](https://uni-lab.bohrium.com)
---

View File

@@ -463,7 +463,7 @@ Uni-Lab 使用 `ResourceDictInstance.get_resource_instance_from_dict()` 方法
### 使用示例
```python
from unilabos.ros.nodes.resource_tracker import ResourceDictInstance
from unilabos.resources.resource_tracker import ResourceDictInstance
# 旧格式节点
old_format_node = {
@@ -477,10 +477,10 @@ old_format_node = {
instance = ResourceDictInstance.get_resource_instance_from_dict(old_format_node)
# 访问标准化后的数据
print(instance.res_content.id) # "pump_1"
print(instance.res_content.uuid) # 自动生成的 UUID
print(instance.res_content.id) # "pump_1"
print(instance.res_content.uuid) # 自动生成的 UUID
print(instance.res_content.config) # {}
print(instance.res_content.data) # {}
print(instance.res_content.data) # {}
```
### 格式迁移建议
@@ -857,4 +857,4 @@ class ResourceDictPosition(BaseModel):
- 在 Web 界面中使用模板创建
- 参考示例文件:`test/experiments/` 目录
- 查看 ResourceDict 源码了解完整定义
- [GitHub 讨论区](https://github.com/dptech-corp/Uni-Lab-OS/discussions)
- [GitHub 讨论区](https://github.com/deepmodeling/Uni-Lab-OS/discussions)

View File

@@ -13,15 +13,26 @@
- 开发者需要 Git 和基本的 Python 开发知识
- 自定义 msgs 需要 GitHub 账号
## 安装包选择
Uni-Lab-OS 提供三个安装包版本,根据您的需求选择:
| 安装包 | 适用场景 | 包含组件 | 磁盘占用 |
|--------|----------|----------|----------|
| **unilabos** | **推荐大多数用户**,生产部署 | 完整安装包,开箱即用 | ~2-3 GB |
| **unilabos-env** | 开发者环境(可编辑安装) | 仅环境依赖,通过 pip 安装 unilabos | ~2 GB |
| **unilabos-full** | 仿真可视化、完整功能体验 | unilabos + 完整 ROS2 桌面版 + Gazebo + MoveIt | ~8-10 GB |
## 安装方式选择
根据您的使用场景,选择合适的安装方式:
| 安装方式 | 适用人群 | 特点 | 安装时间 |
| ---------------------- | -------------------- | ------------------------------ | ---------------------------- |
| **方式一:一键安装** | 实验室用户、快速体验 | 预打包环境,离线可用,无需配置 | 5-10 分钟 (网络良好的情况下) |
| **方式二:手动安装** | 标准用户、生产环境 | 灵活配置,版本可控 | 10-20 分钟 |
| **方式三:开发者安装** | 开发者、需要修改源码 | 可编辑模式,支持自定义 msgs | 20-30 分钟 |
| 安装方式 | 适用人群 | 推荐安装包 | 特点 | 安装时间 |
| ---------------------- | -------------------- | ----------------- | ------------------------------ | ---------------------------- |
| **方式一:一键安装** | 快速体验、演示 | 预打包环境 | 离线可用,无需配置 | 5-10 分钟 (网络良好的情况下) |
| **方式二:手动安装** | **大多数用户** | `unilabos` | 完整功能,开箱即用 | 10-20 分钟 |
| **方式三:开发者安装** | 开发者、需要修改源码 | `unilabos-env` | 可编辑模式,支持自定义开发 | 20-30 分钟 |
| **仿真/可视化** | 仿真测试、可视化调试 | `unilabos-full` | 含 Gazebo、rviz2、MoveIt | 30-60 分钟 |
---
@@ -37,7 +48,7 @@
#### 第一步:下载预打包环境
1. 访问 [GitHub Actions - Conda Pack Build](https://github.com/dptech-corp/Uni-Lab-OS/actions/workflows/conda-pack-build.yml)
1. 访问 [GitHub Actions - Conda Pack Build](https://github.com/deepmodeling/Uni-Lab-OS/actions/workflows/conda-pack-build.yml)
2. 选择最新的成功构建记录(绿色勾号 ✓)
@@ -144,17 +155,38 @@ bash Miniforge3-$(uname)-$(uname -m).sh
使用以下命令创建 Uni-Lab 专用环境:
```bash
mamba create -n unilab python=3.11.11 # 目前ros2组件依赖版本大多为3.11.11
mamba create -n unilab python=3.11.14 # 目前ros2组件依赖版本大多为3.11.14
mamba activate unilab
mamba install -n unilab uni-lab::unilabos -c robostack-staging -c conda-forge
# 选择安装包(三选一):
# 方案 A:标准安装(推荐大多数用户)
mamba install uni-lab::unilabos -c robostack-staging -c conda-forge
# 方案 B:开发者环境(可编辑模式开发)
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
# 然后安装 unilabos 和 pip 依赖:
git clone https://github.com/deepmodeling/Uni-Lab-OS.git && cd Uni-Lab-OS
pip install -e .
uv pip install -r unilabos/utils/requirements.txt
# 方案 C:完整版(含仿真和可视化工具)
mamba install uni-lab::unilabos-full -c robostack-staging -c conda-forge
```
**参数说明**:
- `-n unilab`: 创建名为 "unilab" 的环境
- `uni-lab::unilabos`: 从 uni-lab channel 安装 unilabos 包
- `uni-lab::unilabos`: 安装 unilabos 完整包,开箱即用(推荐)
- `uni-lab::unilabos-env`: 仅安装环境依赖,适合开发者使用 `pip install -e .`
- `uni-lab::unilabos-full`: 安装完整包(含 ROS2 Desktop、Gazebo、MoveIt 等)
- `-c robostack-staging -c conda-forge`: 添加额外的软件源
**包选择建议**
- **日常使用/生产部署**:安装 `unilabos`(推荐,完整功能,开箱即用)
- **开发者**:安装 `unilabos-env`,然后使用 `uv pip install -r unilabos/utils/requirements.txt` 安装依赖,再 `pip install -e .` 进行可编辑安装
- **仿真/可视化**:安装 `unilabos-full`Gazebo、rviz2、MoveIt
**如果遇到网络问题**,可以使用清华镜像源加速下载:
```bash
@@ -163,8 +195,14 @@ mamba config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/m
mamba config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
mamba config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/
# 然后重新执行安装命令
# 然后重新执行安装命令(推荐标准安装)
mamba create -n unilab uni-lab::unilabos -c robostack-staging
# 或完整版(仿真/可视化)
mamba create -n unilab uni-lab::unilabos-full -c robostack-staging
# pip 安装时使用清华镜像(开发者安装时使用)
uv pip install -r unilabos/utils/requirements.txt -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
```
### 第三步:激活环境
@@ -189,13 +227,13 @@ conda activate unilab
### 第一步:克隆仓库
```bash
git clone https://github.com/dptech-corp/Uni-Lab-OS.git
git clone https://github.com/deepmodeling/Uni-Lab-OS.git
cd Uni-Lab-OS
```
如果您需要贡献代码,建议先 Fork 仓库:
1. 访问 https://github.com/dptech-corp/Uni-Lab-OS
1. 访问 https://github.com/deepmodeling/Uni-Lab-OS
2. 点击右上角的 "Fork" 按钮
3. Clone 您的 Fork 版本:
```bash
@@ -203,58 +241,87 @@ cd Uni-Lab-OS
cd Uni-Lab-OS
```
### 第二步:安装基础环境
### 第二步:安装开发环境(unilabos-env)
**推荐方式**:先通过**方式一(一键安装)**或**方式二(手动安装)**完成基础环境的安装,这将包含所有必需的依赖项(ROS2、msgs 等)。
#### 选项 A:通过一键安装(推荐)
参考上文"方式一:一键安装",完成基础环境的安装后,激活环境:
**重要**:开发者请使用 `unilabos-env` 包,它专为开发者设计:
- 包含 ROS2 核心组件和消息包(ros-humble-ros-core、std-msgs、geometry-msgs 等)
- 包含 transforms3d、cv-bridge、tf2 等 conda 依赖
- 包含 `uv` 工具,用于快速安装 pip 依赖
- **不包含** pip 依赖和 unilabos 包(由 `pip install -e .` 和 `uv pip install` 安装)
```bash
# 创建并激活环境
mamba create -n unilab python=3.11.14
conda activate unilab
# 安装开发者环境包(ROS2 + conda 依赖 + uv)
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
```
#### 选项 B:通过手动安装
### 第三步:安装 pip 依赖和可编辑模式安装
参考上文"方式二:手动安装",创建并安装环境
```bash
mamba create -n unilab python=3.11.11
conda activate unilab
mamba install -n unilab uni-lab::unilabos -c robostack-staging -c conda-forge
```
**说明**:这会安装包括 Python 3.11.11、ROS2 Humble、ros-humble-unilabos-msgs 和所有必需依赖
### 第三步:切换到开发版本
现在你已经有了一个完整可用的 Uni-Lab 环境,接下来将 unilabos 包切换为开发版本:
克隆代码并安装依赖
```bash
# 确保环境已激活
conda activate unilab
# 卸载 pip 安装的 unilabos保留所有 conda 依赖
pip uninstall unilabos -y
# 克隆 dev 分支(如果还未克隆)
cd /path/to/your/workspace
git clone -b dev https://github.com/dptech-corp/Uni-Lab-OS.git
# 或者如果已经克隆,切换到 dev 分支
# 克隆仓库(如果还未克隆)
git clone https://github.com/deepmodeling/Uni-Lab-OS.git
cd Uni-Lab-OS
# 切换到 dev 分支(可选)
git checkout dev
git pull
# 以可编辑模式安装开发版 unilabos
pip install -e . -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
```
**参数说明**
**推荐:使用安装脚本**(自动检测中文环境,使用 uv 加速)
- `-e`: editable mode(可编辑模式),代码修改立即生效,无需重新安装
- `-i`: 使用清华镜像源加速下载
- `pip uninstall unilabos`: 只卸载 pip 安装的 unilabos 包,不影响 conda 安装的其他依赖(如 ROS2、msgs 等)
```bash
# 自动检测中文环境,如果是中文系统则使用清华镜像
python scripts/dev_install.py
# 或者手动指定:
python scripts/dev_install.py --china # 强制使用清华镜像
python scripts/dev_install.py --no-mirror # 强制使用 PyPI
python scripts/dev_install.py --skip-deps # 跳过 pip 依赖安装
python scripts/dev_install.py --use-pip # 使用 pip 而非 uv
```
**手动安装**(如果脚本安装失败或速度太慢):
```bash
# 1. 安装 unilabos(可编辑模式)
pip install -e .
# 2. 使用 uv 安装 pip 依赖(推荐,速度更快)
uv pip install -r unilabos/utils/requirements.txt
# 国内用户使用清华镜像:
pip install -e . -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
uv pip install -r unilabos/utils/requirements.txt -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
```
**注意**
- `uv` 已包含在 `unilabos-env` 中,无需单独安装
- `unilabos/utils/requirements.txt` 包含运行 unilabos 所需的所有 pip 依赖
- 部分特殊包(如 pylabrobot)会在运行时由 unilabos 自动检测并安装
**为什么使用可编辑模式?**
- `-e` (editable mode):代码修改**立即生效**,无需重新安装
- 适合开发调试:修改代码后直接运行测试
- 与 `unilabos-env` 配合:环境依赖由 conda 管理,unilabos 代码由 pip 管理
**验证安装**
```bash
# 检查 unilabos 版本
python -c "import unilabos; print(unilabos.__version__)"
# 检查安装位置(应该指向你的代码目录)
pip show unilabos | grep Location
```
### 第四步:安装或自定义 ros-humble-unilabos-msgs(可选)
@@ -464,7 +531,45 @@ cd $CONDA_PREFIX/envs/unilab
### 问题 8: 环境很大,有办法减小吗?
**解决方案**: 预打包的环境包含所有依赖,通常较大(压缩后 2-5GB)。这是为了确保离线安装和完整功能。如果空间有限,考虑使用方式二(手动安装),只安装需要的组件。
**解决方案**:
1. **使用 `unilabos` 标准版**(推荐大多数用户):
```bash
mamba install uni-lab::unilabos -c robostack-staging -c conda-forge
```
标准版包含完整功能,环境大小约 2-3GB(相比完整版的 8-10GB)
2. **使用 `unilabos-env` 开发者版**(最小化):
```bash
mamba install uni-lab::unilabos-env -c robostack-staging -c conda-forge
# 然后手动安装依赖
pip install -e .
uv pip install -r unilabos/utils/requirements.txt
```
开发者版只包含环境依赖,体积最小(约 2GB)。
3. **按需安装额外组件**
如果后续需要特定功能,可以单独安装:
```bash
# 需要 Jupyter
mamba install jupyter jupyros
# 需要可视化
mamba install matplotlib opencv
# 需要仿真(注意:这会安装大量依赖)
mamba install ros-humble-gazebo-ros
```
4. **预打包环境问题**
预打包环境(方式一)包含所有依赖,通常较大(压缩后 2-5GB)。这是为了确保离线安装和完整功能。
**包选择建议**
| 需求 | 推荐包 | 预估大小 |
|------|--------|----------|
| 日常使用/生产部署 | `unilabos` | ~2-3 GB |
| 开发调试(可编辑模式) | `unilabos-env` | ~2 GB |
| 仿真/可视化 | `unilabos-full` | ~8-10 GB |
### 问题 9: 如何更新到最新版本?
@@ -503,14 +608,15 @@ mamba update ros-humble-unilabos-msgs -c uni-lab -c robostack-staging -c conda-f
## 需要帮助?
- **故障排查**: 查看更详细的故障排查信息
- **GitHub Issues**: [报告问题](https://github.com/dptech-corp/Uni-Lab-OS/issues)
- **GitHub Issues**: [报告问题](https://github.com/deepmodeling/Uni-Lab-OS/issues)
- **开发者文档**: 查看开发者指南获取更多技术细节
- **社区讨论**: [GitHub Discussions](https://github.com/dptech-corp/Uni-Lab-OS/discussions)
- **社区讨论**: [GitHub Discussions](https://github.com/deepmodeling/Uni-Lab-OS/discussions)
---
**提示**:
- 生产环境推荐使用方式二(手动安装)的稳定版本
- 开发和测试推荐使用方式三(开发者安装)
- 快速体验和演示推荐使用方式一(一键安装)
- **大多数用户**推荐使用方式二(手动安装)的 `unilabos` 标准版
- **开发者**推荐使用方式三(开发者安装),安装 `unilabos-env` 后使用 `uv pip install -r unilabos/utils/requirements.txt` 安装依赖
- **仿真/可视化**推荐安装 `unilabos-full` 完整版
- **快速体验和演示**推荐使用方式一(一键安装)

View File

@@ -1,32 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re
filepath = r'd:\UniLab\Uni-Lab-OS\unilabos\device_comms\modbus_plc\modbus.py'
with open(filepath, 'r', encoding='utf-8') as f:
content = f.read()
# Replace the DataType placeholder with actual enum
find_pattern = r'# DataType will be accessed via client instance.*?DataType = None # Placeholder.*?\n'
replacement = '''# Define DataType enum for pymodbus 2.5.3 compatibility
class DataType(Enum):
INT16 = "int16"
UINT16 = "uint16"
INT32 = "int32"
UINT32 = "uint32"
INT64 = "int64"
UINT64 = "uint64"
FLOAT32 = "float32"
FLOAT64 = "float64"
STRING = "string"
BOOL = "bool"
'''
new_content = re.sub(find_pattern, replacement, content, flags=re.DOTALL)
with open(filepath, 'w', encoding='utf-8') as f:
f.write(new_content)
print('File updated successfully!')

View File

@@ -1,54 +0,0 @@
{
"nodes": [
{
"id": "BatteryStation",
"name": "扣电工作站",
"parent": null,
"children": [
"coin_cell_deck"
],
"type": "device",
"class":"coincellassemblyworkstation_device",
"position": {
"x": 0,
"y": 0,
"z": 0
},
"config": {
"deck": {
"data": {
"_resource_child_name": "YB_YH_Deck",
"_resource_type": "unilabos.devices.workstation.coin_cell_assembly.YB_YH_materials:CoincellDeck"
}
},
"debug_mode": true,
"protocol_type": []
}
},
{
"id": "YB_YH_Deck",
"name": "YB_YH_Deck",
"children": [],
"parent": "BatteryStation",
"type": "deck",
"class": "CoincellDeck",
"position": {
"x": 0,
"y": 0,
"z": 0
},
"config": {
"type": "CoincellDeck",
"setup": true,
"rotation": {
"x": 0,
"y": 0,
"z": 0,
"type": "Rotation"
}
},
"data": {}
}
],
"links": []
}

View File

@@ -1,98 +0,0 @@
{
"nodes": [
{
"id": "bioyond_cell_workstation",
"name": "配液分液工站",
"parent": null,
"children": [
"YB_Bioyond_Deck"
],
"type": "device",
"class": "bioyond_cell",
"config": {
"deck": {
"data": {
"_resource_child_name": "YB_Bioyond_Deck",
"_resource_type": "unilabos.resources.bioyond.decks:BIOYOND_YB_Deck"
}
},
"protocol_type": []
},
"data": {}
},
{
"id": "YB_Bioyond_Deck",
"name": "YB_Bioyond_Deck",
"children": [],
"parent": "bioyond_cell_workstation",
"type": "deck",
"class": "BIOYOND_YB_Deck",
"position": {
"x": 0,
"y": 0,
"z": 0
},
"config": {
"type": "BIOYOND_YB_Deck",
"setup": true,
"rotation": {
"x": 0,
"y": 0,
"z": 0,
"type": "Rotation"
}
},
"data": {}
},
{
"id": "BatteryStation",
"name": "扣电工作站",
"parent": null,
"children": [
"coin_cell_deck"
],
"type": "device",
"class":"coincellassemblyworkstation_device",
"config": {
"deck": {
"data": {
"_resource_child_name": "YB_YH_Deck",
"_resource_type": "unilabos.devices.workstation.coin_cell_assembly.YB_YH_materials:CoincellDeck"
}
},
"protocol_type": []
},
"position": {
"size": {"height": 1450, "width": 1450, "depth": 2100},
"position": {
"x": -1500,
"y": 0,
"z": 0
}
}
},
{
"id": "YB_YH_Deck",
"name": "YB_YH_Deck",
"children": [],
"parent": "BatteryStation",
"type": "deck",
"class": "CoincellDeck",
"config": {
"type": "CoincellDeck",
"setup": true,
"rotation": {
"x": 0,
"y": 0,
"z": 0,
"type": "Rotation"
}
},
"data": {}
}
],
"links": []
}

View File

@@ -1,6 +1,6 @@
package:
name: ros-humble-unilabos-msgs
version: 0.10.12
version: 0.10.16
source:
path: ../../unilabos_msgs
target_directory: src
@@ -17,7 +17,7 @@ build:
- bash $SRC_DIR/build_ament_cmake.sh
about:
repository: https://github.com/dptech-corp/Uni-Lab-OS
repository: https://github.com/deepmodeling/Uni-Lab-OS
license: BSD-3-Clause
description: "ros-humble-unilabos-msgs is a package that provides message definitions for Uni-Lab-OS."
@@ -25,7 +25,7 @@ requirements:
build:
- ${{ compiler('cxx') }}
- ${{ compiler('c') }}
- python ==3.11.11
- python ==3.11.14
- numpy
- if: build_platform != target_platform
then:
@@ -63,14 +63,14 @@ requirements:
- robostack-staging::ros-humble-rosidl-default-generators
- robostack-staging::ros-humble-std-msgs
- robostack-staging::ros-humble-geometry-msgs
- robostack-staging::ros2-distro-mutex=0.6
- robostack-staging::ros2-distro-mutex=0.7
run:
- robostack-staging::ros-humble-action-msgs
- robostack-staging::ros-humble-ros-workspace
- robostack-staging::ros-humble-rosidl-default-runtime
- robostack-staging::ros-humble-std-msgs
- robostack-staging::ros-humble-geometry-msgs
- robostack-staging::ros2-distro-mutex=0.6
- robostack-staging::ros2-distro-mutex=0.7
- if: osx and x86_64
then:
- __osx >=${{ MACOSX_DEPLOYMENT_TARGET|default('10.14') }}

View File

@@ -1,6 +1,6 @@
package:
name: unilabos
version: "0.10.12"
version: "0.10.16"
source:
path: ../..

View File

@@ -85,7 +85,7 @@ Verification:
-------------
The verify_installation.py script will check:
- Python version (3.11.11)
- Python version (3.11.14)
- ROS2 rclpy installation
- UniLabOS installation and dependencies
@@ -104,7 +104,7 @@ Build Information:
Branch: {branch}
Platform: {platform}
Python: 3.11.11
Python: 3.11.14
Date: {build_date}
Troubleshooting:
@@ -126,7 +126,7 @@ If installation fails:
For more help:
- Documentation: docs/user_guide/installation.md
- Quick Start: QUICK_START_CONDA_PACK.md
- Issues: https://github.com/dptech-corp/Uni-Lab-OS/issues
- Issues: https://github.com/deepmodeling/Uni-Lab-OS/issues
License:
--------
@@ -134,7 +134,7 @@ License:
UniLabOS is licensed under GPL-3.0-only.
See LICENSE file for details.
Repository: https://github.com/dptech-corp/Uni-Lab-OS
Repository: https://github.com/deepmodeling/Uni-Lab-OS
"""
return readme

214
scripts/dev_install.py Normal file
View File

@@ -0,0 +1,214 @@
#!/usr/bin/env python3
"""
Development installation script for UniLabOS.
Auto-detects Chinese locale and uses appropriate mirror.
Usage:
python scripts/dev_install.py
python scripts/dev_install.py --no-mirror # Force no mirror
python scripts/dev_install.py --china # Force China mirror
python scripts/dev_install.py --skip-deps # Skip pip dependencies installation
Flow:
1. pip install -e . (install unilabos in editable mode)
2. Detect Chinese locale
3. Use uv to install pip dependencies from requirements.txt
4. Special packages (like pylabrobot) are handled by environment_check.py at runtime
"""
import locale
import subprocess
import sys
import argparse
from pathlib import Path
# Tsinghua mirror URL
TSINGHUA_MIRROR = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple"
def is_chinese_locale() -> bool:
"""
Detect if system is in Chinese locale.
Same logic as EnvironmentChecker._is_chinese_locale()
"""
try:
lang = locale.getdefaultlocale()[0]
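# NOTE: locale.getdefaultlocale() is deprecated since Python 3.11; locale.getlocale() is the suggested replacement.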
if lang and ("zh" in lang.lower() or "chinese" in lang.lower()):
return True
except Exception:
pass
return False
def run_command(cmd: list, description: str, retry: int = 2) -> bool:
"""Run command with retry support."""
print(f"[INFO] {description}")
print(f"[CMD] {' '.join(cmd)}")
for attempt in range(retry + 1):
try:
result = subprocess.run(cmd, check=True, timeout=600)
print(f"[OK] {description}")
return True
except subprocess.CalledProcessError as e:
if attempt < retry:
print(f"[WARN] Attempt {attempt + 1} failed, retrying...")
else:
print(f"[ERROR] {description} failed: {e}")
return False
except subprocess.TimeoutExpired:
print(f"[ERROR] {description} timed out")
return False
return False
def install_editable(project_root: Path, use_mirror: bool) -> bool:
"""Install unilabos in editable mode using pip."""
cmd = [sys.executable, "-m", "pip", "install", "-e", str(project_root)]
if use_mirror:
cmd.extend(["-i", TSINGHUA_MIRROR])
return run_command(cmd, "Installing unilabos in editable mode")
def install_requirements_uv(requirements_file: Path, use_mirror: bool) -> bool:
"""Install pip dependencies using uv (installed via conda-forge::uv)."""
cmd = ["uv", "pip", "install", "-r", str(requirements_file)]
if use_mirror:
cmd.extend(["-i", TSINGHUA_MIRROR])
return run_command(cmd, "Installing pip dependencies with uv", retry=2)
def install_requirements_pip(requirements_file: Path, use_mirror: bool) -> bool:
"""Fallback: Install pip dependencies using pip."""
cmd = [sys.executable, "-m", "pip", "install", "-r", str(requirements_file)]
if use_mirror:
cmd.extend(["-i", TSINGHUA_MIRROR])
return run_command(cmd, "Installing pip dependencies with pip", retry=2)
def check_uv_available() -> bool:
"""Check if uv is available (installed via conda-forge::uv)."""
try:
subprocess.run(["uv", "--version"], capture_output=True, check=True)
return True
except (subprocess.CalledProcessError, FileNotFoundError):
return False
def main():
parser = argparse.ArgumentParser(description="Development installation script for UniLabOS")
parser.add_argument("--china", action="store_true", help="Force use China mirror (Tsinghua)")
parser.add_argument("--no-mirror", action="store_true", help="Force use default PyPI (no mirror)")
parser.add_argument(
"--skip-deps", action="store_true", help="Skip pip dependencies installation (only install unilabos)"
)
parser.add_argument("--use-pip", action="store_true", help="Use pip instead of uv for dependencies")
args = parser.parse_args()
# Determine project root
script_dir = Path(__file__).parent
project_root = script_dir.parent
requirements_file = project_root / "unilabos" / "utils" / "requirements.txt"
if not (project_root / "setup.py").exists():
print(f"[ERROR] setup.py not found in {project_root}")
sys.exit(1)
print("=" * 60)
print("UniLabOS Development Installation")
print("=" * 60)
print(f"Project root: {project_root}")
print()
# Determine mirror usage based on locale
if args.no_mirror:
use_mirror = False
print("[INFO] Mirror disabled by --no-mirror flag")
elif args.china:
use_mirror = True
print("[INFO] China mirror enabled by --china flag")
else:
use_mirror = is_chinese_locale()
if use_mirror:
print("[INFO] Chinese locale detected, using Tsinghua mirror")
else:
print("[INFO] Non-Chinese locale detected, using default PyPI")
print()
# Step 1: Install unilabos in editable mode
print("[STEP 1] Installing unilabos in editable mode...")
if not install_editable(project_root, use_mirror):
print("[ERROR] Failed to install unilabos")
print()
print("Manual fallback:")
if use_mirror:
print(f" pip install -e {project_root} -i {TSINGHUA_MIRROR}")
else:
print(f" pip install -e {project_root}")
sys.exit(1)
print()
# Step 2: Install pip dependencies
if args.skip_deps:
print("[INFO] Skipping pip dependencies installation (--skip-deps)")
else:
print("[STEP 2] Installing pip dependencies...")
if not requirements_file.exists():
print(f"[WARN] Requirements file not found: {requirements_file}")
print("[INFO] Skipping dependencies installation")
else:
# Try uv first (faster), fallback to pip
if args.use_pip:
print("[INFO] Using pip (--use-pip flag)")
success = install_requirements_pip(requirements_file, use_mirror)
elif check_uv_available():
print("[INFO] Using uv (installed via conda-forge::uv)")
success = install_requirements_uv(requirements_file, use_mirror)
if not success:
print("[WARN] uv failed, falling back to pip...")
success = install_requirements_pip(requirements_file, use_mirror)
else:
print("[WARN] uv not available (should be installed via: mamba install conda-forge::uv)")
print("[INFO] Falling back to pip...")
success = install_requirements_pip(requirements_file, use_mirror)
if not success:
print()
print("[WARN] Failed to install some dependencies automatically.")
print("You can manually install them:")
if use_mirror:
print(f" uv pip install -r {requirements_file} -i {TSINGHUA_MIRROR}")
print(" or:")
print(f" pip install -r {requirements_file} -i {TSINGHUA_MIRROR}")
else:
print(f" uv pip install -r {requirements_file}")
print(" or:")
print(f" pip install -r {requirements_file}")
print()
print("=" * 60)
print("Installation complete!")
print("=" * 60)
print()
print("Note: Some special packages (like pylabrobot) are installed")
print("automatically at runtime by unilabos if needed.")
print()
print("Verify installation:")
print(' python -c "import unilabos; print(unilabos.__version__)"')
print()
print("If you encounter issues, you can manually install dependencies:")
if use_mirror:
print(f" uv pip install -r unilabos/utils/requirements.txt -i {TSINGHUA_MIRROR}")
else:
print(" uv pip install -r unilabos/utils/requirements.txt")
print()
if __name__ == "__main__":
main()

View File

@@ -4,7 +4,7 @@ package_name = 'unilabos'
setup(
name=package_name,
version='0.10.12',
version='0.10.16',
packages=find_packages(),
include_package_data=True,
install_requires=['setuptools'],

View File

@@ -1,72 +0,0 @@
{
"nodes": [
{
"id": "reaction_station_bioyond",
"name": "reaction_station_bioyond",
"parent": null,
"children": [
"Bioyond_Deck"
],
"type": "device",
"class": "reaction_station.bioyond",
"config": {
"config": {
"api_key": "DE9BDDA0",
"api_host": "http://192.168.1.200:44402",
"workflow_mappings": {
"reactor_taken_out": "3a16081e-4788-ca37-eff4-ceed8d7019d1",
"reactor_taken_in": "3a160df6-76b3-0957-9eb0-cb496d5721c6",
"Solid_feeding_vials": "3a160877-87e7-7699-7bc6-ec72b05eb5e6",
"Liquid_feeding_vials(non-titration)": "3a167d99-6158-c6f0-15b5-eb030f7d8e47",
"Liquid_feeding_solvents": "3a160824-0665-01ed-285a-51ef817a9046",
"Liquid_feeding(titration)": "3a16082a-96ac-0449-446a-4ed39f3365b6",
"liquid_feeding_beaker": "3a16087e-124f-8ddb-8ec1-c2dff09ca784",
"Drip_back": "3a162cf9-6aac-565a-ddd7-682ba1796a4a"
},
"material_type_mappings": {
"烧杯": ["YB_1FlaskCarrier", "3a14196b-24f2-ca49-9081-0cab8021bf1a"],
"试剂瓶": ["YB_1BottleCarrier", ""],
"样品板": ["YB_6StockCarrier", "3a14196e-b7a0-a5da-1931-35f3000281e9"],
"分装板": ["YB_6VialCarrier", "3a14196e-5dfe-6e21-0c79-fe2036d052c4"],
"样品瓶": ["YB_Solid_Stock", "3a14196a-cf7d-8aea-48d8-b9662c7dba94"],
"90%分装小瓶": ["YB_Solid_Vial", "3a14196c-cdcf-088d-dc7d-5cf38f0ad9ea"],
"10%分装小瓶": ["YB_Liquid_Vial", "3a14196c-76be-2279-4e22-7310d69aed68"]
}
},
"deck": {
"data": {
"_resource_child_name": "Bioyond_Deck",
"_resource_type": "unilabos.resources.bioyond.decks:BIOYOND_PolymerReactionStation_Deck"
}
},
"protocol_type": []
},
"data": {}
},
{
"id": "Bioyond_Deck",
"name": "Bioyond_Deck",
"children": [
],
"parent": "reaction_station_bioyond",
"type": "deck",
"class": "BIOYOND_PolymerReactionStation_Deck",
"position": {
"x": 0,
"y": 0,
"z": 0
},
"config": {
"type": "BIOYOND_PolymerReactionStation_Deck",
"setup": true,
"rotation": {
"x": 0,
"y": 0,
"z": 0,
"type": "Rotation"
}
},
"data": {}
}
]
}

View File

@@ -1,52 +0,0 @@
[
{
"id": "3a1d377b-299d-d0f2-ced9-48257f60dfad",
"typeName": "加样头(大)",
"code": "0005-00145",
"barCode": "",
"name": "LiDFOB",
"quantity": 9999.0,
"lockQuantity": 0.0,
"unit": "个",
"status": 1,
"isUse": false,
"locations": [
{
"id": "3a19da56-1379-ff7c-1745-07e200b44ce2",
"whid": "3a19da56-1378-613b-29f2-871e1a287aa5",
"whName": "粉末加样头堆栈",
"code": "0005-0001",
"x": 1,
"y": 1,
"z": 1,
"quantity": 0
}
],
"detail": []
},
{
"id": "3a1d377b-6a81-6a7e-147c-f89f6463656d",
"typeName": "液",
"code": "0006-00141",
"barCode": "",
"name": "EMC",
"quantity": 99999.0,
"lockQuantity": 0.0,
"unit": "g",
"status": 1,
"isUse": false,
"locations": [
{
"id": "3a1baa20-a7b1-c665-8b9c-d8099d07d2f6",
"whid": "3a1baa20-a7b0-5c19-8844-5de8924d4e78",
"whName": "4号手套箱内部堆栈",
"code": "0015-0001",
"x": 1,
"y": 1,
"z": 1,
"quantity": 0
}
],
"detail": []
}
]

View File

@@ -1,99 +0,0 @@
{
"typeId": "3a190c8b-3284-af78-d29f-9a69463ad047",
"code": "",
"barCode": "",
"name": "test",
"unit": "",
"parameters": "{}",
"quantity": "",
"details": [
{
"typeId": "3a190c8c-fe8f-bf48-0dc3-97afc7f508eb",
"code": "",
"name": "配液瓶(小)11",
"quantity": "1",
"x": 1,
"y": 1,
"z": 1,
"unit": "",
"parameters": "{}"
},
{
"typeId": "3a190c8c-fe8f-bf48-0dc3-97afc7f508eb",
"code": "",
"name": "配液瓶(小)21",
"quantity": "1",
"x": 2,
"y": 1,
"z": 1,
"unit": "",
"parameters": "{}"
},
{
"typeId": "3a190c8c-fe8f-bf48-0dc3-97afc7f508eb",
"code": "",
"name": "配液瓶(小)12",
"quantity": "1",
"x": 1,
"y": 2,
"z": 1,
"unit": "",
"parameters": "{}"
},
{
"typeId": "3a190c8c-fe8f-bf48-0dc3-97afc7f508eb",
"code": "",
"name": "配液瓶(小)22",
"quantity": "1",
"x": 2,
"y": 2,
"z": 1,
"unit": "",
"parameters": "{}"
},
{
"typeId": "3a190c8c-fe8f-bf48-0dc3-97afc7f508eb",
"code": "",
"name": "配液瓶(小)13",
"quantity": "1",
"x": 1,
"y": 3,
"z": 1,
"unit": "",
"parameters": "{}"
},
{
"typeId": "3a190c8c-fe8f-bf48-0dc3-97afc7f508eb",
"code": "",
"name": "配液瓶(小)23",
"quantity": "1",
"x": 2,
"y": 3,
"z": 1,
"unit": "",
"parameters": "{}"
},
{
"typeId": "3a190c8c-fe8f-bf48-0dc3-97afc7f508eb",
"code": "",
"name": "配液瓶(小)14",
"quantity": "1",
"x": 1,
"y": 4,
"z": 1,
"unit": "",
"parameters": "{}"
},
{
"typeId": "3a190c8c-fe8f-bf48-0dc3-97afc7f508eb",
"code": "",
"name": "配液瓶(小)24",
"quantity": "1",
"x": 2,
"y": 4,
"z": 1,
"unit": "",
"parameters": "{}"
}
]
}

View File

@@ -1,148 +0,0 @@
[
{
"id": "3a1d4c14-a9fb-d7dc-9e96-7a3ad6e50219",
"typeName": "配液瓶(小)板",
"code": "0001-00093",
"barCode": "",
"name": "test",
"quantity": 2.0,
"lockQuantity": 0.0,
"unit": "块",
"status": 1,
"isUse": false,
"locations": [
{
"id": "3a19deae-2c7a-36f5-5e41-02c5b66feaea",
"whid": "3a19deae-2c79-05a3-9c76-8e6760424841",
"whName": "手动堆栈",
"code": "1",
"x": 1,
"y": 1,
"z": 1,
"quantity": 0
}
],
"detail": [
{
"id": "3a1d4c14-a9fc-1daa-71fa-146cb1ccb930",
"detailMaterialId": "3a1d4c14-a9fc-4f38-4c48-68486c391c42",
"code": "0001-00093 - 05",
"name": "配液瓶(小)",
"quantity": "1",
"lockQuantity": "0",
"unit": "个",
"x": 1,
"y": 3,
"z": 1,
"associateId": null,
"typeName": "配液瓶(小)",
"typeId": "3a190c8c-fe8f-bf48-0dc3-97afc7f508eb"
},
{
"id": "3a1d4c14-a9fc-3659-ea61-cd587da9e131",
"detailMaterialId": "3a1d4c14-a9fc-018f-93e5-c49343d37758",
"code": "0001-00093 - 08",
"name": "配液瓶(小)",
"quantity": "1",
"lockQuantity": "0",
"unit": "个",
"x": 2,
"y": 4,
"z": 1,
"associateId": null,
"typeName": "配液瓶(小)",
"typeId": "3a190c8c-fe8f-bf48-0dc3-97afc7f508eb"
},
{
"id": "3a1d4c14-a9fc-3f94-de83-979d2646e313",
"detailMaterialId": "3a1d4c14-a9fc-9987-c0ef-4b7cbad49e6b",
"code": "0001-00093 - 01",
"name": "配液瓶(小)",
"quantity": "1",
"lockQuantity": "0",
"unit": "个",
"x": 1,
"y": 1,
"z": 1,
"associateId": null,
"typeName": "配液瓶(小)",
"typeId": "3a190c8c-fe8f-bf48-0dc3-97afc7f508eb"
},
{
"id": "3a1d4c14-a9fc-8c35-6b25-913b11dbaf4e",
"detailMaterialId": "3a1d4c14-a9fc-9a83-865b-0c26ea5e8cc4",
"code": "0001-00093 - 03",
"name": "配液瓶(小)",
"quantity": "1",
"lockQuantity": "0",
"unit": "个",
"x": 1,
"y": 2,
"z": 1,
"associateId": null,
"typeName": "配液瓶(小)",
"typeId": "3a190c8c-fe8f-bf48-0dc3-97afc7f508eb"
},
{
"id": "3a1d4c14-a9fc-b41f-e968-64953bfddccd",
"detailMaterialId": "3a1d4c14-a9fc-daf7-9d64-e5ec8d3ae0e2",
"code": "0001-00093 - 07",
"name": "配液瓶(小)",
"quantity": "1",
"lockQuantity": "0",
"unit": "个",
"x": 1,
"y": 4,
"z": 1,
"associateId": null,
"typeName": "配液瓶(小)",
"typeId": "3a190c8c-fe8f-bf48-0dc3-97afc7f508eb"
},
{
"id": "3a1d4c14-a9fc-c20f-c26e-b1bb2cdc3bca",
"detailMaterialId": "3a1d4c14-a9fc-673b-ac83-aaaf71287f1f",
"code": "0001-00093 - 06",
"name": "配液瓶(小)",
"quantity": "1",
"lockQuantity": "0",
"unit": "个",
"x": 2,
"y": 3,
"z": 1,
"associateId": null,
"typeName": "配液瓶(小)",
"typeId": "3a190c8c-fe8f-bf48-0dc3-97afc7f508eb"
},
{
"id": "3a1d4c14-a9fc-cf21-059c-fde361d82b6f",
"detailMaterialId": "3a1d4c14-a9fc-25b1-e736-6b0d8dac0fae",
"code": "0001-00093 - 02",
"name": "配液瓶(小)",
"quantity": "1",
"lockQuantity": "0",
"unit": "个",
"x": 2,
"y": 1,
"z": 1,
"associateId": null,
"typeName": "配液瓶(小)",
"typeId": "3a190c8c-fe8f-bf48-0dc3-97afc7f508eb"
},
{
"id": "3a1d4c14-a9fc-d732-2b93-9b2bd2bf581b",
"detailMaterialId": "3a1d4c14-a9fc-7f5d-b6b6-8bcb2e15f320",
"code": "0001-00093 - 04",
"name": "配液瓶(小)",
"quantity": "1",
"lockQuantity": "0",
"unit": "个",
"x": 2,
"y": 2,
"z": 1,
"associateId": null,
"typeName": "配液瓶(小)",
"typeId": "3a190c8c-fe8f-bf48-0dc3-97afc7f508eb"
}
]
}
]

7
tests/__init__.py Normal file
View File

@@ -0,0 +1,7 @@
"""
测试包根目录。
让 `tests.*` 模块可以被正常 import(例如给 `unilabos` 下的测试入口使用)。
"""

View File

@@ -0,0 +1 @@

View File

@@ -0,0 +1,5 @@
"""
液体处理设备相关测试。
"""

View File

@@ -0,0 +1,505 @@
import asyncio
from dataclasses import dataclass
from typing import Any, Iterable, List, Optional, Sequence, Tuple
import pytest
from unilabos.devices.liquid_handling.liquid_handler_abstract import LiquidHandlerAbstract
@dataclass(frozen=True)
class DummyContainer:
name: str
def __repr__(self) -> str: # pragma: no cover
return f"DummyContainer({self.name})"
@dataclass(frozen=True)
class DummyTipSpot:
name: str
def __repr__(self) -> str: # pragma: no cover
return f"DummyTipSpot({self.name})"
def make_tip_iter(n: int = 256) -> Iterable[List[DummyTipSpot]]:
"""Yield lists so code can safely call `tip.extend(next(self.current_tip))`."""
for i in range(n):
yield [DummyTipSpot(f"tip_{i}")]
class FakeLiquidHandler(LiquidHandlerAbstract):
"""不初始化真实 backend/deck仅用来记录 transfer_liquid 内部调用序列。"""
def __init__(self, channel_num: int = 8):
# 不调用 super().__init__(),避免真实硬件/后端依赖
self.channel_num = channel_num
self.support_touch_tip = True
self.current_tip = iter(make_tip_iter())
self.calls: List[Tuple[str, Any]] = []
async def pick_up_tips(self, tip_spots, use_channels=None, offsets=None, **backend_kwargs):
self.calls.append(("pick_up_tips", {"tips": list(tip_spots), "use_channels": use_channels}))
async def aspirate(
self,
resources: Sequence[Any],
vols: List[float],
use_channels: Optional[List[int]] = None,
flow_rates: Optional[List[Optional[float]]] = None,
offsets: Any = None,
liquid_height: Any = None,
blow_out_air_volume: Any = None,
spread: str = "wide",
**backend_kwargs,
):
self.calls.append(
(
"aspirate",
{
"resources": list(resources),
"vols": list(vols),
"use_channels": list(use_channels) if use_channels is not None else None,
"flow_rates": list(flow_rates) if flow_rates is not None else None,
"offsets": list(offsets) if offsets is not None else None,
"liquid_height": list(liquid_height) if liquid_height is not None else None,
"blow_out_air_volume": list(blow_out_air_volume) if blow_out_air_volume is not None else None,
},
)
)
async def dispense(
self,
resources: Sequence[Any],
vols: List[float],
use_channels: Optional[List[int]] = None,
flow_rates: Optional[List[Optional[float]]] = None,
offsets: Any = None,
liquid_height: Any = None,
blow_out_air_volume: Any = None,
spread: str = "wide",
**backend_kwargs,
):
self.calls.append(
(
"dispense",
{
"resources": list(resources),
"vols": list(vols),
"use_channels": list(use_channels) if use_channels is not None else None,
"flow_rates": list(flow_rates) if flow_rates is not None else None,
"offsets": list(offsets) if offsets is not None else None,
"liquid_height": list(liquid_height) if liquid_height is not None else None,
"blow_out_air_volume": list(blow_out_air_volume) if blow_out_air_volume is not None else None,
},
)
)
async def discard_tips(self, use_channels=None, *args, **kwargs):
# 有的分支是 discard_tips(use_channels=[0]),有的分支是 discard_tips([0..7])(位置参数)
self.calls.append(("discard_tips", {"use_channels": list(use_channels) if use_channels is not None else None}))
async def custom_delay(self, seconds=0, msg=None):
self.calls.append(("custom_delay", {"seconds": seconds, "msg": msg}))
async def touch_tip(self, targets):
# 原实现会访问 targets.get_size_x() 等;测试里只记录调用
self.calls.append(("touch_tip", {"targets": targets}))
async def mix(self, targets, mix_time=None, mix_vol=None, height_to_bottom=None, offsets=None, mix_rate=None, none_keys=None):
self.calls.append(
(
"mix",
{
"targets": targets,
"mix_time": mix_time,
"mix_vol": mix_vol,
},
)
)
def run(coro):
return asyncio.run(coro)
def test_one_to_one_single_channel_basic_calls():
lh = FakeLiquidHandler(channel_num=1)
lh.current_tip = iter(make_tip_iter(64))
sources = [DummyContainer(f"S{i}") for i in range(3)]
targets = [DummyContainer(f"T{i}") for i in range(3)]
run(
lh.transfer_liquid(
sources=sources,
targets=targets,
tip_racks=[],
use_channels=[0],
asp_vols=[1, 2, 3],
dis_vols=[4, 5, 6],
mix_times=None, # 应该仍能执行(不 mix)
)
)
assert [c[0] for c in lh.calls].count("pick_up_tips") == 3
assert [c[0] for c in lh.calls].count("aspirate") == 3
assert [c[0] for c in lh.calls].count("dispense") == 3
assert [c[0] for c in lh.calls].count("discard_tips") == 3
# 每次 aspirate/dispense 都是单孔列表
aspirates = [payload for name, payload in lh.calls if name == "aspirate"]
assert aspirates[0]["resources"] == [sources[0]]
assert aspirates[0]["vols"] == [1.0]
dispenses = [payload for name, payload in lh.calls if name == "dispense"]
assert dispenses[2]["resources"] == [targets[2]]
assert dispenses[2]["vols"] == [6.0]
def test_one_to_one_single_channel_before_stage_mixes_prior_to_aspirate():
lh = FakeLiquidHandler(channel_num=1)
lh.current_tip = iter(make_tip_iter(16))
source = DummyContainer("S0")
target = DummyContainer("T0")
run(
lh.transfer_liquid(
sources=[source],
targets=[target],
tip_racks=[],
use_channels=[0],
asp_vols=[5],
dis_vols=[5],
mix_stage="before",
mix_times=1,
mix_vol=3,
)
)
names = [name for name, _ in lh.calls]
assert names.count("mix") == 1
assert names.index("mix") < names.index("aspirate")
def test_one_to_one_eight_channel_groups_by_8():
lh = FakeLiquidHandler(channel_num=8)
lh.current_tip = iter(make_tip_iter(256))
sources = [DummyContainer(f"S{i}") for i in range(16)]
targets = [DummyContainer(f"T{i}") for i in range(16)]
asp_vols = list(range(1, 17))
dis_vols = list(range(101, 117))
run(
lh.transfer_liquid(
sources=sources,
targets=targets,
tip_racks=[],
use_channels=list(range(8)),
asp_vols=asp_vols,
dis_vols=dis_vols,
mix_times=0, # 触发逻辑但不 mix
)
)
# 16 个任务 -> 2 组,每组 8 通道一起做
assert [c[0] for c in lh.calls].count("pick_up_tips") == 2
aspirates = [payload for name, payload in lh.calls if name == "aspirate"]
dispenses = [payload for name, payload in lh.calls if name == "dispense"]
assert len(aspirates) == 2
assert len(dispenses) == 2
assert aspirates[0]["resources"] == sources[0:8]
assert aspirates[0]["vols"] == [float(v) for v in asp_vols[0:8]]
assert dispenses[1]["resources"] == targets[8:16]
assert dispenses[1]["vols"] == [float(v) for v in dis_vols[8:16]]
def test_one_to_one_eight_channel_requires_multiple_of_8_targets():
lh = FakeLiquidHandler(channel_num=8)
lh.current_tip = iter(make_tip_iter(64))
sources = [DummyContainer(f"S{i}") for i in range(9)]
targets = [DummyContainer(f"T{i}") for i in range(9)]
with pytest.raises(ValueError, match="multiple of 8"):
run(
lh.transfer_liquid(
sources=sources,
targets=targets,
tip_racks=[],
use_channels=list(range(8)),
asp_vols=[1] * 9,
dis_vols=[1] * 9,
mix_times=0,
)
)
def test_one_to_one_eight_channel_parameter_lists_are_chunked_per_8():
lh = FakeLiquidHandler(channel_num=8)
lh.current_tip = iter(make_tip_iter(512))
sources = [DummyContainer(f"S{i}") for i in range(16)]
targets = [DummyContainer(f"T{i}") for i in range(16)]
asp_vols = [i + 1 for i in range(16)]
dis_vols = [200 + i for i in range(16)]
asp_flow_rates = [0.1 * (i + 1) for i in range(16)]
dis_flow_rates = [0.2 * (i + 1) for i in range(16)]
offsets = [f"offset_{i}" for i in range(16)]
liquid_heights = [i * 0.5 for i in range(16)]
blow_out_air_volume = [i + 0.05 for i in range(16)]
run(
lh.transfer_liquid(
sources=sources,
targets=targets,
tip_racks=[],
use_channels=list(range(8)),
asp_vols=asp_vols,
dis_vols=dis_vols,
asp_flow_rates=asp_flow_rates,
dis_flow_rates=dis_flow_rates,
offsets=offsets,
liquid_height=liquid_heights,
blow_out_air_volume=blow_out_air_volume,
mix_times=0,
)
)
aspirates = [payload for name, payload in lh.calls if name == "aspirate"]
dispenses = [payload for name, payload in lh.calls if name == "dispense"]
assert len(aspirates) == len(dispenses) == 2
for batch_idx in range(2):
start = batch_idx * 8
end = start + 8
asp_call = aspirates[batch_idx]
dis_call = dispenses[batch_idx]
assert asp_call["resources"] == sources[start:end]
assert asp_call["flow_rates"] == asp_flow_rates[start:end]
assert asp_call["offsets"] == offsets[start:end]
assert asp_call["liquid_height"] == liquid_heights[start:end]
assert asp_call["blow_out_air_volume"] == blow_out_air_volume[start:end]
assert dis_call["flow_rates"] == dis_flow_rates[start:end]
assert dis_call["offsets"] == offsets[start:end]
assert dis_call["liquid_height"] == liquid_heights[start:end]
assert dis_call["blow_out_air_volume"] == blow_out_air_volume[start:end]
def test_one_to_one_eight_channel_handles_32_tasks_four_batches():
lh = FakeLiquidHandler(channel_num=8)
lh.current_tip = iter(make_tip_iter(1024))
sources = [DummyContainer(f"S{i}") for i in range(32)]
targets = [DummyContainer(f"T{i}") for i in range(32)]
asp_vols = [i + 1 for i in range(32)]
dis_vols = [300 + i for i in range(32)]
run(
lh.transfer_liquid(
sources=sources,
targets=targets,
tip_racks=[],
use_channels=list(range(8)),
asp_vols=asp_vols,
dis_vols=dis_vols,
mix_times=0,
)
)
pick_calls = [name for name, _ in lh.calls if name == "pick_up_tips"]
aspirates = [payload for name, payload in lh.calls if name == "aspirate"]
dispenses = [payload for name, payload in lh.calls if name == "dispense"]
assert len(pick_calls) == 4
assert len(aspirates) == len(dispenses) == 4
assert aspirates[0]["resources"] == sources[0:8]
assert aspirates[-1]["resources"] == sources[24:32]
assert dispenses[0]["resources"] == targets[0:8]
assert dispenses[-1]["resources"] == targets[24:32]
def test_one_to_many_single_channel_aspirates_total_when_asp_vol_too_small():
lh = FakeLiquidHandler(channel_num=1)
lh.current_tip = iter(make_tip_iter(64))
source = DummyContainer("SRC")
targets = [DummyContainer(f"T{i}") for i in range(3)]
dis_vols = [10, 20, 30] # sum=60
run(
lh.transfer_liquid(
sources=[source],
targets=targets,
tip_racks=[],
use_channels=[0],
asp_vols=10, # 小于 sum(dis_vols) -> 应吸 60
dis_vols=dis_vols,
mix_times=0,
)
)
aspirates = [payload for name, payload in lh.calls if name == "aspirate"]
assert len(aspirates) == 1
assert aspirates[0]["resources"] == [source]
assert aspirates[0]["vols"] == [60.0]
assert aspirates[0]["use_channels"] == [0]
dispenses = [payload for name, payload in lh.calls if name == "dispense"]
assert [d["vols"][0] for d in dispenses] == [10.0, 20.0, 30.0]
def test_one_to_many_eight_channel_basic():
lh = FakeLiquidHandler(channel_num=8)
lh.current_tip = iter(make_tip_iter(128))
source = DummyContainer("SRC")
targets = [DummyContainer(f"T{i}") for i in range(8)]
dis_vols = [i + 1 for i in range(8)]
run(
lh.transfer_liquid(
sources=[source],
targets=targets,
tip_racks=[],
use_channels=list(range(8)),
asp_vols=999, # one-to-many with 8 channels aspirates per dis_vols (each channel its own volume)
dis_vols=dis_vols,
mix_times=0,
)
)
aspirates = [payload for name, payload in lh.calls if name == "aspirate"]
assert aspirates[0]["resources"] == [source] * 8
assert aspirates[0]["vols"] == [float(v) for v in dis_vols]
dispenses = [payload for name, payload in lh.calls if name == "dispense"]
assert dispenses[0]["resources"] == targets
assert dispenses[0]["vols"] == [float(v) for v in dis_vols]
def test_many_to_one_single_channel_standard_dispense_equals_asp_by_default():
lh = FakeLiquidHandler(channel_num=1)
lh.current_tip = iter(make_tip_iter(128))
sources = [DummyContainer(f"S{i}") for i in range(3)]
target = DummyContainer("T")
asp_vols = [5, 6, 7]
run(
lh.transfer_liquid(
sources=sources,
targets=[target],
tip_racks=[],
use_channels=[0],
asp_vols=asp_vols,
dis_vols=1, # many-to-one accepts a scalar; in non-proportional mode each dispense equals the matching asp_vol
mix_times=0,
)
)
dispenses = [payload for name, payload in lh.calls if name == "dispense"]
assert [d["vols"][0] for d in dispenses] == [float(v) for v in asp_vols]
assert all(d["resources"] == [target] for d in dispenses)
def test_many_to_one_single_channel_before_stage_mixes_target_once():
lh = FakeLiquidHandler(channel_num=1)
lh.current_tip = iter(make_tip_iter(128))
sources = [DummyContainer("S0"), DummyContainer("S1")]
target = DummyContainer("T")
run(
lh.transfer_liquid(
sources=sources,
targets=[target],
tip_racks=[],
use_channels=[0],
asp_vols=[5, 6],
dis_vols=1,
mix_stage="before",
mix_times=2,
mix_vol=4,
)
)
names = [name for name, _ in lh.calls]
assert names[0] == "mix"
assert names.count("mix") == 1
def test_many_to_one_single_channel_proportional_mixing_uses_dis_vols_per_source():
lh = FakeLiquidHandler(channel_num=1)
lh.current_tip = iter(make_tip_iter(128))
sources = [DummyContainer(f"S{i}") for i in range(3)]
target = DummyContainer("T")
asp_vols = [5, 6, 7]
dis_vols = [1, 2, 3]
run(
lh.transfer_liquid(
sources=sources,
targets=[target],
tip_racks=[],
use_channels=[0],
asp_vols=asp_vols,
dis_vols=dis_vols, # proportional mode
mix_times=0,
)
)
dispenses = [payload for name, payload in lh.calls if name == "dispense"]
assert [d["vols"][0] for d in dispenses] == [float(v) for v in dis_vols]
def test_many_to_one_eight_channel_basic():
lh = FakeLiquidHandler(channel_num=8)
lh.current_tip = iter(make_tip_iter(256))
sources = [DummyContainer(f"S{i}") for i in range(8)]
target = DummyContainer("T")
asp_vols = [10 + i for i in range(8)]
run(
lh.transfer_liquid(
sources=sources,
targets=[target],
tip_racks=[],
use_channels=list(range(8)),
asp_vols=asp_vols,
dis_vols=999, # in non-proportional mode each channel dispenses its matching asp_vol
mix_times=0,
)
)
aspirates = [payload for name, payload in lh.calls if name == "aspirate"]
dispenses = [payload for name, payload in lh.calls if name == "dispense"]
assert aspirates[0]["resources"] == sources
assert aspirates[0]["vols"] == [float(v) for v in asp_vols]
assert dispenses[0]["resources"] == [target] * 8
assert dispenses[0]["vols"] == [float(v) for v in asp_vols]
def test_transfer_liquid_mode_detection_unsupported_shape_raises():
lh = FakeLiquidHandler(channel_num=8)
lh.current_tip = iter(make_tip_iter(64))
sources = [DummyContainer("S0"), DummyContainer("S1")]
targets = [DummyContainer("T0"), DummyContainer("T1"), DummyContainer("T2")]
with pytest.raises(ValueError, match="Unsupported transfer mode"):
run(
lh.transfer_liquid(
sources=sources,
targets=targets,
tip_racks=[],
use_channels=[0],
asp_vols=[1, 1],
dis_vols=[1, 1, 1],
mix_times=0,
)
)
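Taken together, these tests pin down the three call shapes transfer_liquid() accepts: one-to-one (equal-length sources and targets, batched by channel count), one-to-many (a single source fanned out to several targets), and many-to-one (several sources pooled into one target); any other shape raises "Unsupported transfer mode". A minimal one-to-one sketch outside the test harness, assuming an async context with an already configured handler lh, real containers, and a tip rack (all names hypothetical):
# Minimal sketch (assumptions: async context; lh, src_wells, dst_wells, tip_rack already exist).
# With 16 tasks and 8 channels this runs as two aspirate/dispense batches,
# exactly as asserted in the tests above.
await lh.transfer_liquid(
    sources=src_wells,            # 16 source containers
    targets=dst_wells,            # 16 matching target containers
    tip_racks=[tip_rack],
    use_channels=list(range(8)),
    asp_vols=[50] * 16,
    dis_vols=[50] * 16,
    mix_times=0,
)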

View File

@@ -1,7 +1,7 @@
import pytest
from unilabos.resources.bioyond.bottle_carriers import BIOYOND_Electrolyte_6VialCarrier, BIOYOND_Electrolyte_1BottleCarrier
from unilabos.resources.bioyond.bottles import YB_Solid_Vial, YB_Solution_Beaker, YB_Reagent_Bottle
from unilabos.resources.bioyond.bottles import BIOYOND_PolymerStation_Solid_Vial, BIOYOND_PolymerStation_Solution_Beaker, BIOYOND_PolymerStation_Reagent_Bottle
def test_bottle_carrier() -> "BottleCarrier":
@@ -16,9 +16,9 @@ def test_bottle_carrier() -> "BottleCarrier":
print(f"1烧杯载架: {beaker_carrier.name}, 位置数: {len(beaker_carrier.sites)}")
# Create the bottles and the beaker
powder_bottle = YB_Solid_Vial("powder_bottle_01")
solution_beaker = YB_Solution_Beaker("solution_beaker_01")
reagent_bottle = YB_Reagent_Bottle("reagent_bottle_01")
powder_bottle = BIOYOND_PolymerStation_Solid_Vial("powder_bottle_01")
solution_beaker = BIOYOND_PolymerStation_Solution_Beaker("solution_beaker_01")
reagent_bottle = BIOYOND_PolymerStation_Reagent_Bottle("reagent_bottle_01")
print(f"\n创建的物料:")
print(f"粉末瓶: {powder_bottle.name} - {powder_bottle.diameter}mm x {powder_bottle.height}mm, {powder_bottle.max_volume}μL")

View File

@@ -12,13 +12,13 @@ lab_registry.setup()
type_mapping = {
"烧杯": ("YB_1FlaskCarrier", "3a14196b-24f2-ca49-9081-0cab8021bf1a"),
"试剂瓶": ("YB_1BottleCarrier", ""),
"样品板": ("YB_6StockCarrier", "3a14196e-b7a0-a5da-1931-35f3000281e9"),
"分装板": ("YB_6VialCarrier", "3a14196e-5dfe-6e21-0c79-fe2036d052c4"),
"样品瓶": ("YB_Solid_Stock", "3a14196a-cf7d-8aea-48d8-b9662c7dba94"),
"90%分装小瓶": ("YB_Solid_Vial", "3a14196c-cdcf-088d-dc7d-5cf38f0ad9ea"),
"10%分装小瓶": ("YB_Liquid_Vial", "3a14196c-76be-2279-4e22-7310d69aed68"),
"烧杯": ("BIOYOND_PolymerStation_1FlaskCarrier", "3a14196b-24f2-ca49-9081-0cab8021bf1a"),
"试剂瓶": ("BIOYOND_PolymerStation_1BottleCarrier", ""),
"样品板": ("BIOYOND_PolymerStation_6StockCarrier", "3a14196e-b7a0-a5da-1931-35f3000281e9"),
"分装板": ("BIOYOND_PolymerStation_6VialCarrier", "3a14196e-5dfe-6e21-0c79-fe2036d052c4"),
"样品瓶": ("BIOYOND_PolymerStation_Solid_Stock", "3a14196a-cf7d-8aea-48d8-b9662c7dba94"),
"90%分装小瓶": ("BIOYOND_PolymerStation_Solid_Vial", "3a14196c-cdcf-088d-dc7d-5cf38f0ad9ea"),
"10%分装小瓶": ("BIOYOND_PolymerStation_Liquid_Vial", "3a14196c-76be-2279-4e22-7310d69aed68"),
}

View File

@@ -1,24 +1,24 @@
from ast import If
import pytest
import json
import os
from pylabrobot.resources import Resource as ResourcePLR
from unilabos.resources.graphio import resource_bioyond_to_plr
from unilabos.ros.nodes.resource_tracker import ResourceTreeSet
from unilabos.resources.resource_tracker import ResourceTreeSet
from unilabos.registry.registry import lab_registry
from unilabos.resources.bioyond.decks import BIOYOND_PolymerReactionStation_Deck
from unilabos.resources.bioyond.decks import YB_Deck
lab_registry.setup()
type_mapping = {
"加样头(大)": ("YB_jia_yang_tou_da", "3a190ca0-b2f6-9aeb-8067-547e72c11469"),
"": ("YB_1BottleCarrier", "3a190ca1-2add-2b23-f8e1-bbd348b7f790"),
"配液瓶(小)": ("YB_peiyepingxiaoban", "3a190c8b-3284-af78-d29f-9a69463ad047"),
"配液瓶(小)": ("YB_pei_ye_xiao_Bottler", "3a190c8c-fe8f-bf48-0dc3-97afc7f508eb"),
"烧杯": ("BIOYOND_PolymerStation_1FlaskCarrier", "3a14196b-24f2-ca49-9081-0cab8021bf1a"),
"试剂瓶": ("BIOYOND_PolymerStation_1BottleCarrier", ""),
"样品": ("BIOYOND_PolymerStation_6StockCarrier", "3a14196e-b7a0-a5da-1931-35f3000281e9"),
"分装板": ("BIOYOND_PolymerStation_6VialCarrier", "3a14196e-5dfe-6e21-0c79-fe2036d052c4"),
"样品瓶": ("BIOYOND_PolymerStation_Solid_Stock", "3a14196a-cf7d-8aea-48d8-b9662c7dba94"),
"90%分装小瓶": ("BIOYOND_PolymerStation_Solid_Vial", "3a14196c-cdcf-088d-dc7d-5cf38f0ad9ea"),
"10%分装小瓶": ("BIOYOND_PolymerStation_Liquid_Vial", "3a14196c-76be-2279-4e22-7310d69aed68"),
}
@@ -56,20 +56,12 @@ def bioyond_materials_liquidhandling_2() -> list[dict]:
"bioyond_materials_reaction",
"bioyond_materials_liquidhandling_1",
])
def test_resourcetreeset_from_plr() -> list[dict]:
# 直接加载 bioyond_materials_reaction.json 文件
current_dir = os.path.dirname(os.path.abspath(__file__))
json_path = os.path.join(current_dir, "test.json")
with open(json_path, "r", encoding="utf-8") as f:
materials = json.load(f)
deck = YB_Deck("test_deck")
def test_resourcetreeset_from_plr(materials_fixture, request) -> list[dict]:
materials = request.getfixturevalue(materials_fixture)
deck = BIOYOND_PolymerReactionStation_Deck("test_deck")
output = resource_bioyond_to_plr(materials, type_mapping=type_mapping, deck=deck)
print(output)
# print(deck.summary())
print(deck.summary())
r = ResourceTreeSet.from_plr_resources([deck])
print(r.dump())
# json.dump(deck.serialize(), open("test.json", "w", encoding="utf-8"), indent=4)
if __name__ == "__main__":
test_resourcetreeset_from_plr()

View File

@@ -11,10 +11,10 @@ import os
# 添加项目根目录到路径
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))))
# 导入测试模块
from test.ros.msgs.test_basic import TestBasicFunctionality
from test.ros.msgs.test_conversion import TestBasicConversion, TestMappingConversion
from test.ros.msgs.test_mapping import TestTypeMapping, TestFieldMapping
# Import test modules (uniformly from the tests package)
from tests.ros.msgs.test_basic import TestBasicFunctionality
from tests.ros.msgs.test_conversion import TestBasicConversion, TestMappingConversion
from tests.ros.msgs.test_mapping import TestTypeMapping, TestFieldMapping
def run_tests():

View File

(image diff: before and after both 148 KiB)

View File

(image diff: before and after both 140 KiB)

View File

(image diff: before and after both 117 KiB)

View File

@@ -1 +1 @@
__version__ = "0.10.12"
__version__ = "0.10.16"

unilabos/__main__.py (new file, 6 lines)
View File

@@ -0,0 +1,6 @@
"""Entry point for `python -m unilabos`."""
from unilabos.app.main import main
if __name__ == "__main__":
main()

View File

@@ -1,6 +1,6 @@
import threading
from unilabos.ros.nodes.resource_tracker import ResourceTreeSet
from unilabos.resources.resource_tracker import ResourceTreeSet
from unilabos.utils import logger

View File

@@ -7,7 +7,6 @@ import sys
import threading
import time
from typing import Dict, Any, List
import networkx as nx
import yaml
@@ -17,9 +16,14 @@ unilabos_dir = os.path.dirname(os.path.dirname(current_dir))
if unilabos_dir not in sys.path:
sys.path.append(unilabos_dir)
from unilabos.app.utils import cleanup_for_restart
from unilabos.utils.banner_print import print_status, print_unilab_banner
from unilabos.config.config import load_config, BasicConfig, HTTPConfig
# Global restart flags (used by ws_client and web/server)
_restart_requested: bool = False
_restart_reason: str = ""
def load_config_from_file(config_path):
if config_path is None:
@@ -156,6 +160,17 @@ def parse_args():
default=False,
help="Complete registry information",
)
parser.add_argument(
"--check_mode",
action="store_true",
default=False,
help="Run in check mode for CI: validates registry imports and ensures no file changes",
)
parser.add_argument(
"--no_update_feedback",
action="store_true",
help="Disable sending update feedback to server",
)
# workflow upload subcommand
workflow_parser = subparsers.add_parser(
"workflow_upload",
@@ -201,7 +216,10 @@ def main():
args_dict = vars(args)
# Environment check - verify and auto-install required packages (optional)
if not args_dict.get("skip_env_check", False):
skip_env_check = args_dict.get("skip_env_check", False)
check_mode = args_dict.get("check_mode", False)
if not skip_env_check:
from unilabos.utils.environment_check import check_environment
if not check_environment(auto_install=True):
@@ -212,7 +230,21 @@ def main():
# Load configuration: prefer the config file, then read from env
config_path = args_dict.get("config")
if os.getcwd().endswith("unilabos_data"):
if check_mode:
args_dict["working_dir"] = os.path.abspath(os.getcwd())
# 当 skip_env_check 时,默认使用当前目录作为 working_dir
if skip_env_check and not args_dict.get("working_dir") and not config_path:
working_dir = os.path.abspath(os.getcwd())
print_status(f"跳过环境检查模式:使用当前目录作为工作目录 {working_dir}", "info")
# 检查当前目录是否有 local_config.py
local_config_in_cwd = os.path.join(working_dir, "local_config.py")
if os.path.exists(local_config_in_cwd):
config_path = local_config_in_cwd
print_status(f"发现本地配置文件: {config_path}", "info")
else:
print_status(f"未指定config路径可通过 --config 传入 local_config.py 文件路径", "info")
elif os.getcwd().endswith("unilabos_data"):
working_dir = os.path.abspath(os.getcwd())
else:
working_dir = os.path.abspath(os.path.join(os.getcwd(), "unilabos_data"))
@@ -231,7 +263,7 @@ def main():
working_dir = os.path.dirname(config_path)
elif os.path.exists(working_dir) and os.path.exists(os.path.join(working_dir, "local_config.py")):
config_path = os.path.join(working_dir, "local_config.py")
elif not config_path and (
elif not skip_env_check and not config_path and (
not os.path.exists(working_dir) or not os.path.exists(os.path.join(working_dir, "local_config.py"))
):
print_status(f"未指定config路径可通过 --config 传入 local_config.py 文件路径", "info")
@@ -245,9 +277,11 @@ def main():
print_status(f"已创建 local_config.py 路径: {config_path}", "info")
else:
os._exit(1)
# Load config file
# Load config file (skipped in check_mode)
print_status(f"当前工作目录为 {working_dir}", "info")
load_config_from_file(config_path)
if not check_mode:
load_config_from_file(config_path)
# 根据配置重新设置日志级别
from unilabos.utils.log import configure_logger, logger
@@ -297,11 +331,13 @@ def main():
BasicConfig.is_host_mode = not args_dict.get("is_slave", False)
BasicConfig.slave_no_host = args_dict.get("slave_no_host", False)
BasicConfig.upload_registry = args_dict.get("upload_registry", False)
BasicConfig.no_update_feedback = args_dict.get("no_update_feedback", False)
BasicConfig.communication_protocol = "websocket"
machine_name = os.popen("hostname").read().strip()
machine_name = "".join([c if c.isalnum() or c == "_" else "_" for c in machine_name])
BasicConfig.machine_name = machine_name
BasicConfig.vis_2d_enable = args_dict["2d_vis"]
BasicConfig.check_mode = check_mode
from unilabos.resources.graphio import (
read_node_link_json,
@@ -315,15 +351,19 @@ def main():
from unilabos.app.web import start_server
from unilabos.app.register import register_devices_and_resources
from unilabos.resources.graphio import modify_to_backend_format
from unilabos.ros.nodes.resource_tracker import ResourceTreeSet, ResourceDict
from unilabos.resources.resource_tracker import ResourceTreeSet, ResourceDict
# 显示启动横幅
print_unilab_banner(args_dict)
# 注册表
lab_registry = build_registry(
args_dict["registry_path"], args_dict.get("complete_registry", False), BasicConfig.upload_registry
)
# Registry - force-enable complete_registry in check_mode
complete_registry = args_dict.get("complete_registry", False) or check_mode
lab_registry = build_registry(args_dict["registry_path"], complete_registry, BasicConfig.upload_registry)
# Check mode: exit right after complete_registry finishes; git diff detection is handled by the CI workflow
if check_mode:
print_status("Check mode: complete_registry 完成,退出", "info")
os._exit(0)
if BasicConfig.upload_registry:
# 设备注册到服务端 - 需要 ak 和 sk
@@ -388,6 +428,10 @@ def main():
for ind, i in enumerate(resource_edge_info[::-1]):
source_node: ResourceDict = nodes[i["source"]]
target_node: ResourceDict = nodes[i["target"]]
if "sourceHandle" not in source_node:
continue
if "targetHandle" not in target_node:
continue
source_handle = i["sourceHandle"]
target_handle = i["targetHandle"]
source_handler_keys = [
@@ -414,7 +458,7 @@ def main():
# 如果从远端获取了物料信息,则与本地物料进行同步
if request_startup_json and "nodes" in request_startup_json:
print_status("开始同步远端物料到本地...", "info")
remote_tree_set = ResourceTreeSet.from_raw_list(request_startup_json["nodes"])
remote_tree_set = ResourceTreeSet.from_raw_dict_list(request_startup_json["nodes"])
resource_tree_set.merge_remote_resources(remote_tree_set)
print_status("远端物料同步完成", "info")
@@ -493,13 +537,19 @@ def main():
time.sleep(1)
else:
start_backend(**args_dict)
start_server(
restart_requested = start_server(
open_browser=not args_dict["disable_browser"],
port=BasicConfig.port,
)
if restart_requested:
print_status("[Main] Restart requested, cleaning up...", "info")
cleanup_for_restart()
return
else:
start_backend(**args_dict)
start_server(
# Start the server (WebSocket-triggered restart supported by default)
restart_requested = start_server(
open_browser=not args_dict["disable_browser"],
port=BasicConfig.port,
)

unilabos/app/utils.py (new file, 176 lines)
View File

@@ -0,0 +1,176 @@
"""
UniLabOS application utility functions
Provides cleanup, restart, and related helpers
"""
import glob
import os
import shutil
import sys
def patch_rclpy_dll_windows():
"""在 Windows + conda 环境下为 rclpy 打 DLL 加载补丁"""
if sys.platform != "win32" or not os.environ.get("CONDA_PREFIX"):
return
try:
import rclpy
return
except ImportError as e:
if not str(e).startswith("DLL load failed"):
return
cp = os.environ["CONDA_PREFIX"]
impl = os.path.join(cp, "Lib", "site-packages", "rclpy", "impl", "implementation_singleton.py")
pyd = glob.glob(os.path.join(cp, "Lib", "site-packages", "rclpy", "_rclpy_pybind11*.pyd"))
if not os.path.exists(impl) or not pyd:
return
with open(impl, "r", encoding="utf-8") as f:
content = f.read()
lib_bin = os.path.join(cp, "Library", "bin").replace("\\", "/")
patch = f'# UniLabOS DLL Patch\nimport os,ctypes\nos.add_dll_directory("{lib_bin}") if hasattr(os,"add_dll_directory") else None\ntry: ctypes.CDLL("{pyd[0].replace(chr(92),"/")}")\nexcept: pass\n# End Patch\n'
shutil.copy2(impl, impl + ".bak")
with open(impl, "w", encoding="utf-8") as f:
f.write(patch + content)
patch_rclpy_dll_windows()
import gc
import threading
import time
from unilabos.utils.banner_print import print_status
def cleanup_for_restart() -> bool:
"""
Clean up all resources for restart without exiting the process.
This function prepares the system for re-initialization by:
1. Stopping all communication clients
2. Destroying ROS nodes
3. Resetting singletons
4. Waiting for threads to finish
Returns:
bool: True if cleanup was successful, False otherwise
"""
print_status("[Restart] Starting cleanup for restart...", "info")
# Step 1: Stop WebSocket communication client
print_status("[Restart] Step 1: Stopping WebSocket client...", "info")
try:
from unilabos.app.communication import get_communication_client
comm_client = get_communication_client()
if comm_client is not None:
comm_client.stop()
print_status("[Restart] WebSocket client stopped", "info")
except Exception as e:
print_status(f"[Restart] Error stopping WebSocket: {e}", "warning")
# Step 2: Get HostNode and cleanup ROS
print_status("[Restart] Step 2: Cleaning up ROS nodes...", "info")
try:
from unilabos.ros.nodes.presets.host_node import HostNode
import rclpy
from rclpy.timer import Timer
host_instance = HostNode.get_instance(timeout=5)
if host_instance is not None:
print_status(f"[Restart] Found HostNode: {host_instance.device_id}", "info")
# Gracefully shutdown background threads
print_status("[Restart] Shutting down background threads...", "info")
HostNode.shutdown_background_threads(timeout=5.0)
print_status("[Restart] Background threads shutdown complete", "info")
# Stop discovery timer
if hasattr(host_instance, "_discovery_timer") and isinstance(host_instance._discovery_timer, Timer):
host_instance._discovery_timer.cancel()
print_status("[Restart] Discovery timer cancelled", "info")
# Destroy device nodes
device_count = len(host_instance.devices_instances)
print_status(f"[Restart] Destroying {device_count} device instances...", "info")
for device_id, device_node in list(host_instance.devices_instances.items()):
try:
if hasattr(device_node, "ros_node_instance") and device_node.ros_node_instance is not None:
device_node.ros_node_instance.destroy_node()
print_status(f"[Restart] Device {device_id} destroyed", "info")
except Exception as e:
print_status(f"[Restart] Error destroying device {device_id}: {e}", "warning")
# Clear devices instances
host_instance.devices_instances.clear()
host_instance.devices_names.clear()
# Destroy host node
try:
host_instance.destroy_node()
print_status("[Restart] HostNode destroyed", "info")
except Exception as e:
print_status(f"[Restart] Error destroying HostNode: {e}", "warning")
# Reset HostNode state
HostNode.reset_state()
print_status("[Restart] HostNode state reset", "info")
# Shutdown executor first (to stop executor.spin() gracefully)
if hasattr(rclpy, "__executor") and rclpy.__executor is not None:
try:
rclpy.__executor.shutdown()
rclpy.__executor = None # Clear for restart
print_status("[Restart] ROS executor shutdown complete", "info")
except Exception as e:
print_status(f"[Restart] Error shutting down executor: {e}", "warning")
# Shutdown rclpy
if rclpy.ok():
rclpy.shutdown()
print_status("[Restart] rclpy shutdown complete", "info")
except ImportError as e:
print_status(f"[Restart] ROS modules not available: {e}", "warning")
except Exception as e:
print_status(f"[Restart] Error in ROS cleanup: {e}", "warning")
return False
# Step 3: Reset communication client singleton
print_status("[Restart] Step 3: Resetting singletons...", "info")
try:
from unilabos.app import communication
if hasattr(communication, "_communication_client"):
communication._communication_client = None
print_status("[Restart] Communication client singleton reset", "info")
except Exception as e:
print_status(f"[Restart] Error resetting communication singleton: {e}", "warning")
# Step 4: Wait for threads to finish
print_status("[Restart] Step 4: Waiting for threads to finish...", "info")
time.sleep(3) # Give threads time to finish
# Check remaining threads
remaining_threads = []
for t in threading.enumerate():
if t.name != "MainThread" and t.is_alive():
remaining_threads.append(t.name)
if remaining_threads:
print_status(
f"[Restart] Warning: {len(remaining_threads)} threads still running: {remaining_threads}", "warning"
)
else:
print_status("[Restart] All threads stopped", "info")
# Step 5: Force garbage collection
print_status("[Restart] Step 5: Running garbage collection...", "info")
gc.collect()
gc.collect() # Run twice for weak references
print_status("[Restart] Garbage collection complete", "info")
print_status("[Restart] Cleanup complete. Ready for re-initialization.", "info")
return True
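The consumer side of this helper is already visible in the unilabos/app/main.py diff above; a condensed sketch of that path, with the outer re-invocation of main() after cleanup assumed rather than shown in the diff:
# Sketch mirroring the main() diff: start_server() returns True when a restart
# was requested over WebSocket; main() then cleans up and returns.
from unilabos.app.web import start_server
from unilabos.app.utils import cleanup_for_restart
from unilabos.utils.banner_print import print_status

restart_requested = start_server(open_browser=True, port=8002)
if restart_requested:
    print_status("[Main] Restart requested, cleaning up...", "info")
    cleanup_for_restart()   # stops the WS client, destroys ROS nodes, resets singletons, runs GC
    # main() returns here so a supervising caller can re-initialize everything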

View File

@@ -6,12 +6,10 @@ HTTP客户端模块
import json
import os
import time
from threading import Thread
from typing import List, Dict, Any, Optional
import requests
from unilabos.ros.nodes.resource_tracker import ResourceTreeSet
from unilabos.resources.resource_tracker import ResourceTreeSet
from unilabos.utils.log import info
from unilabos.config.config import HTTPConfig, BasicConfig
from unilabos.utils import logger
@@ -300,6 +298,10 @@ class HTTPClient:
)
if response.status_code not in [200, 201]:
logger.error(f"注册资源失败: {response.status_code}, {response.text}")
if response.status_code == 200:
res = response.json()
if "code" in res and res["code"] != 0:
logger.error(f"注册资源失败: {response.text}")
return response
def request_startup_json(self) -> Optional[Dict[str, Any]]:

View File

@@ -58,14 +58,14 @@ class JobResultStore:
feedback=feedback or {},
timestamp=time.time(),
)
logger.debug(f"[JobResultStore] Stored result for job {job_id[:8]}, status={status}")
logger.trace(f"[JobResultStore] Stored result for job {job_id[:8]}, status={status}")
def get_and_remove(self, job_id: str) -> Optional[JobResult]:
"""获取并删除任务结果"""
with self._results_lock:
result = self._results.pop(job_id, None)
if result:
logger.debug(f"[JobResultStore] Retrieved and removed result for job {job_id[:8]}")
logger.trace(f"[JobResultStore] Retrieved and removed result for job {job_id[:8]}")
return result
def get_result(self, job_id: str) -> Optional[JobResult]:

View File

@@ -6,7 +6,6 @@ Web服务器模块
import webbrowser
import uvicorn
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from starlette.responses import Response
@@ -96,7 +95,7 @@ def setup_server() -> FastAPI:
return app
def start_server(host: str = "0.0.0.0", port: int = 8002, open_browser: bool = True) -> None:
def start_server(host: str = "0.0.0.0", port: int = 8002, open_browser: bool = True) -> bool:
"""
启动服务器
@@ -104,7 +103,14 @@ def start_server(host: str = "0.0.0.0", port: int = 8002, open_browser: bool = T
host: 服务器主机
port: 服务器端口
open_browser: 是否自动打开浏览器
Returns:
bool: True if restart was requested, False otherwise
"""
import threading
import time
from uvicorn import Config, Server
# 设置服务器
setup_server()
@@ -123,7 +129,37 @@ def start_server(host: str = "0.0.0.0", port: int = 8002, open_browser: bool = T
# 启动服务器
info(f"[Web] 启动FastAPI服务器: {host}:{port}")
uvicorn.run(app, host=host, port=port, log_config=log_config)
# 使用支持重启的模式
config = Config(app=app, host=host, port=port, log_config=log_config)
server = Server(config)
# 启动服务器线程
server_thread = threading.Thread(target=server.run, daemon=True, name="uvicorn_server")
server_thread.start()
info("[Web] Server started, monitoring for restart requests...")
# 监控重启标志
import unilabos.app.main as main_module
while server_thread.is_alive():
if hasattr(main_module, "_restart_requested") and main_module._restart_requested:
info(
f"[Web] Restart requested via WebSocket, reason: {getattr(main_module, '_restart_reason', 'unknown')}"
)
main_module._restart_requested = False
# 停止服务器
server.should_exit = True
server_thread.join(timeout=5)
info("[Web] Server stopped, ready for restart")
return True
time.sleep(1)
return False
# Start the server when the script is run directly
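start_server() now runs uvicorn in a background thread and polls module-level flags on unilabos.app.main about once per second, so triggering a restart from anywhere in the process is just a flag write. A minimal sketch of the trigger side (the same thing _handle_request_restart does in the WebSocket client later in this changeset):
# Sketch: requesting a restart programmatically.
import unilabos.app.main as main_module

main_module._restart_reason = "remote request"   # illustrative reason string
main_module._restart_requested = True
# Within about a second start_server() sets server.should_exit, joins the
# uvicorn thread and returns True, after which main() runs cleanup_for_restart().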

View File

@@ -23,7 +23,7 @@ from typing import Optional, Dict, Any, List
from urllib.parse import urlparse
from enum import Enum
from jedi.inference.gradual.typing import TypedDict
from typing_extensions import TypedDict
from unilabos.app.model import JobAddReq
from unilabos.ros.nodes.presets.host_node import HostNode
@@ -154,7 +154,7 @@ class DeviceActionManager:
job_info.set_ready_timeout(10) # 设置10秒超时
self.active_jobs[device_key] = job_info
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.info(f"[DeviceActionManager] Job {job_log} can start immediately for {device_key}")
logger.trace(f"[DeviceActionManager] Job {job_log} can start immediately for {device_key}")
return True
def start_job(self, job_id: str) -> bool:
@@ -210,8 +210,9 @@ class DeviceActionManager:
job_info.update_timestamp()
# 从all_jobs中移除已结束的job
del self.all_jobs[job_id]
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.info(f"[DeviceActionManager] Job {job_log} ended for {device_key}")
# job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
# logger.debug(f"[DeviceActionManager] Job {job_log} ended for {device_key}")
pass
else:
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.warning(f"[DeviceActionManager] Job {job_log} was not active for {device_key}")
@@ -227,7 +228,7 @@ class DeviceActionManager:
next_job_log = format_job_log(
next_job.job_id, next_job.task_id, next_job.device_id, next_job.action_name
)
logger.info(f"[DeviceActionManager] Next job {next_job_log} can start for {device_key}")
logger.trace(f"[DeviceActionManager] Next job {next_job_log} can start for {device_key}")
return next_job
return None
@@ -268,7 +269,7 @@ class DeviceActionManager:
# 从all_jobs中移除
del self.all_jobs[job_id]
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
logger.info(f"[DeviceActionManager] Active job {job_log} cancelled for {device_key}")
logger.trace(f"[DeviceActionManager] Active job {job_log} cancelled for {device_key}")
# 启动下一个任务
if device_key in self.device_queues and self.device_queues[device_key]:
@@ -281,7 +282,7 @@ class DeviceActionManager:
next_job_log = format_job_log(
next_job.job_id, next_job.task_id, next_job.device_id, next_job.action_name
)
logger.info(f"[DeviceActionManager] Next job {next_job_log} can start after cancel")
logger.trace(f"[DeviceActionManager] Next job {next_job_log} can start after cancel")
return True
# 如果是排队中的任务
@@ -295,7 +296,7 @@ class DeviceActionManager:
job_log = format_job_log(
job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name
)
logger.info(f"[DeviceActionManager] Queued job {job_log} cancelled for {device_key}")
logger.trace(f"[DeviceActionManager] Queued job {job_log} cancelled for {device_key}")
return True
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
@@ -359,7 +360,7 @@ class MessageProcessor:
self.device_manager = device_manager
self.queue_processor = None # 延迟设置
self.websocket_client = None # 延迟设置
self.session_id = ""
self.session_id = str(uuid.uuid4())[:6]  # generate a random session_id
# WebSocket连接
self.websocket = None
@@ -421,7 +422,7 @@ class MessageProcessor:
ssl_context = ssl_module.create_default_context()
ws_logger = logging.getLogger("websockets.client")
# 日志级别已在 unilabos.utils.log 中统一配置为 WARNING
ws_logger.setLevel(logging.INFO)
async with websockets.connect(
self.websocket_url,
@@ -488,7 +489,20 @@ class MessageProcessor:
async for message in self.websocket:
try:
data = json.loads(message)
await self._process_message(data)
message_type = data.get("action", "")
message_data = data.get("data")
if self.session_id and self.session_id == data.get("edge_session"):
await self._process_message(message_type, message_data)
else:
if message_type.endswith("_material"):
logger.trace(
f"[MessageProcessor] 收到一条归属 {data.get('edge_session')} 的旧消息:{data}"
)
logger.debug(
f"[MessageProcessor] 跳过了一条归属 {data.get('edge_session')} 的旧消息: {data.get('action')}"
)
else:
await self._process_message(message_type, message_data)
except json.JSONDecodeError:
logger.error(f"[MessageProcessor] Invalid JSON received: {message}")
except Exception as e:
@@ -554,12 +568,9 @@ class MessageProcessor:
finally:
logger.debug("[MessageProcessor] Send handler stopped")
async def _process_message(self, data: Dict[str, Any]):
async def _process_message(self, message_type: str, message_data: Dict[str, Any]):
"""处理收到的消息"""
message_type = data.get("action", "")
message_data = data.get("data")
logger.debug(f"[MessageProcessor] Processing message: {message_type}")
logger.trace(f"[MessageProcessor] Processing message: {message_type}")
try:
if message_type == "pong":
@@ -571,14 +582,19 @@ class MessageProcessor:
elif message_type == "cancel_action" or message_type == "cancel_task":
await self._handle_cancel_action(message_data)
elif message_type == "add_material":
# noinspection PyTypeChecker
await self._handle_resource_tree_update(message_data, "add")
elif message_type == "update_material":
# noinspection PyTypeChecker
await self._handle_resource_tree_update(message_data, "update")
elif message_type == "remove_material":
# noinspection PyTypeChecker
await self._handle_resource_tree_update(message_data, "remove")
elif message_type == "session_id":
self.session_id = message_data.get("session_id")
logger.info(f"[MessageProcessor] Session ID: {self.session_id}")
# elif message_type == "session_id":
# self.session_id = message_data.get("session_id")
# logger.info(f"[MessageProcessor] Session ID: {self.session_id}")
elif message_type == "request_restart":
await self._handle_request_restart(message_data)
else:
logger.debug(f"[MessageProcessor] Unknown message type: {message_type}")
@@ -626,13 +642,13 @@ class MessageProcessor:
await self._send_action_state_response(
device_id, action_name, task_id, job_id, "query_action_status", True, 0
)
logger.info(f"[MessageProcessor] Job {job_log} can start immediately")
logger.trace(f"[MessageProcessor] Job {job_log} can start immediately")
else:
# 需要排队
await self._send_action_state_response(
device_id, action_name, task_id, job_id, "query_action_status", False, 10
)
logger.info(f"[MessageProcessor] Job {job_log} queued")
logger.trace(f"[MessageProcessor] Job {job_log} queued")
# 通知QueueProcessor有新的队列更新
if self.queue_processor:
@@ -836,9 +852,7 @@ class MessageProcessor:
device_action_groups[key_add] = []
device_action_groups[key_add].append(item["uuid"])
logger.info(
f"[MessageProcessor] Resource migrated: {item['uuid'][:8]} from {device_old_id} to {device_id}"
)
logger.info(f"[资源同步] 跨站Transfer: {item['uuid'][:8]} from {device_old_id} to {device_id}")
else:
# 正常update
key = (device_id, "update")
@@ -852,11 +866,13 @@ class MessageProcessor:
device_action_groups[key] = []
device_action_groups[key].append(item["uuid"])
logger.info(f"触发物料更新 {action} 分组数量: {len(device_action_groups)}, 总数量: {len(resource_uuid_list)}")
logger.trace(
f"[资源同步] 动作 {action} 分组数量: {len(device_action_groups)}, 总数量: {len(resource_uuid_list)}"
)
# 为每个(device_id, action)创建独立的更新线程
for (device_id, actual_action), items in device_action_groups.items():
logger.info(f"设备 {device_id} 物料更新 {actual_action} 数量: {len(items)}")
logger.trace(f"[资源同步] {device_id} 物料动作 {actual_action} 数量: {len(items)}")
def _notify_resource_tree(dev_id, act, item_list):
try:
@@ -888,6 +904,51 @@ class MessageProcessor:
)
thread.start()
async def _handle_request_restart(self, data: Dict[str, Any]):
"""
处理重启请求
当LabGo发送request_restart时执行清理并触发重启
"""
reason = data.get("reason", "unknown")
delay = data.get("delay", 2) # 默认延迟2秒
logger.info(f"[MessageProcessor] Received restart request, reason: {reason}, delay: {delay}s")
# 发送确认消息
if self.websocket_client:
await self.websocket_client.send_message(
{"action": "restart_acknowledged", "data": {"reason": reason, "delay": delay}}
)
# 设置全局重启标志
import unilabos.app.main as main_module
main_module._restart_requested = True
main_module._restart_reason = reason
# 延迟后执行清理
await asyncio.sleep(delay)
# 在新线程中执行清理,避免阻塞当前事件循环
def do_cleanup():
import time
time.sleep(0.5) # 给当前消息处理完成的时间
logger.info(f"[MessageProcessor] Starting cleanup for restart, reason: {reason}")
try:
from unilabos.app.utils import cleanup_for_restart
if cleanup_for_restart():
logger.info("[MessageProcessor] Cleanup successful, main() will restart")
else:
logger.error("[MessageProcessor] Cleanup failed")
except Exception as e:
logger.error(f"[MessageProcessor] Error during cleanup: {e}")
cleanup_thread = threading.Thread(target=do_cleanup, name="RestartCleanupThread", daemon=True)
cleanup_thread.start()
logger.info(f"[MessageProcessor] Restart cleanup scheduled")
async def _send_action_state_response(
self, device_id: str, action_name: str, task_id: str, job_id: str, typ: str, free: bool, need_more: int
):
@@ -1074,7 +1135,7 @@ class QueueProcessor:
success = self.message_processor.send_message(message)
job_log = format_job_log(job_info.job_id, job_info.task_id, job_info.device_id, job_info.action_name)
if success:
logger.debug(f"[QueueProcessor] Sent busy/need_more for queued job {job_log}")
logger.trace(f"[QueueProcessor] Sent busy/need_more for queued job {job_log}")
else:
logger.warning(f"[QueueProcessor] Failed to send busy status for job {job_log}")
@@ -1097,7 +1158,7 @@ class QueueProcessor:
job_info.action_name,
)
logger.info(f"[QueueProcessor] Job {job_log} completed with status: {status}")
logger.trace(f"[QueueProcessor] Job {job_log} completed with status: {status}")
# 结束任务,获取下一个可执行的任务
next_job = self.device_manager.end_job(job_id)
@@ -1117,8 +1178,8 @@ class QueueProcessor:
},
}
self.message_processor.send_message(message)
next_job_log = format_job_log(next_job.job_id, next_job.task_id, next_job.device_id, next_job.action_name)
logger.info(f"[QueueProcessor] Notified next job {next_job_log} can start")
# next_job_log = format_job_log(next_job.job_id, next_job.task_id, next_job.device_id, next_job.action_name)
# logger.debug(f"[QueueProcessor] Notified next job {next_job_log} can start")
# 立即触发下一轮状态检查
self.notify_queue_update()
@@ -1260,7 +1321,7 @@ class WebSocketClient(BaseCommunicationClient):
except (KeyError, AttributeError):
logger.warning(f"[WebSocketClient] Failed to remove job {item.job_id} from HostNode status")
logger.info(f"[WebSocketClient] Intercepting final status for job_id: {item.job_id} - {status}")
# logger.debug(f"[WebSocketClient] Intercepting final status for job_id: {item.job_id} - {status}")
# 通知队列处理器job完成包括timeout的job
self.queue_processor.handle_job_completed(item.job_id, status)
@@ -1282,7 +1343,7 @@ class WebSocketClient(BaseCommunicationClient):
self.message_processor.send_message(message)
job_log = format_job_log(item.job_id, item.task_id, item.device_id, item.action_name)
logger.debug(f"[WebSocketClient] Job status published: {job_log} - {status}")
logger.trace(f"[WebSocketClient] Job status published: {job_log} - {status}")
def send_ping(self, ping_id: str, timestamp: float) -> None:
"""发送ping消息"""
@@ -1313,17 +1374,59 @@ class WebSocketClient(BaseCommunicationClient):
logger.warning(f"[WebSocketClient] Failed to cancel job {job_log}")
def publish_host_ready(self) -> None:
"""发布host_node ready信号"""
"""发布host_node ready信号,包含设备和动作信息"""
if self.is_disabled or not self.is_connected():
logger.debug("[WebSocketClient] Not connected, cannot publish host ready signal")
return
# 收集设备信息
devices = []
machine_name = BasicConfig.machine_name
try:
host_node = HostNode.get_instance(0)
if host_node:
# 获取设备信息
for device_id, namespace in host_node.devices_names.items():
device_key = (
f"{namespace}/{device_id}" if namespace.startswith("/") else f"/{namespace}/{device_id}"
)
is_online = device_key in host_node._online_devices
# 获取设备的动作信息
actions = {}
for action_id, client in host_node._action_clients.items():
# action_id 格式: /namespace/device_id/action_name
if device_id in action_id:
action_name = action_id.split("/")[-1]
actions[action_name] = {
"action_path": action_id,
"action_type": str(type(client).__name__),
}
devices.append(
{
"device_id": device_id,
"namespace": namespace,
"device_key": device_key,
"is_online": is_online,
"machine_name": host_node.device_machine_names.get(device_id, machine_name),
"actions": actions,
}
)
logger.info(f"[WebSocketClient] Collected {len(devices)} devices for host_ready")
except Exception as e:
logger.warning(f"[WebSocketClient] Error collecting device info: {e}")
message = {
"action": "host_node_ready",
"data": {
"status": "ready",
"timestamp": time.time(),
"machine_name": machine_name,
"devices": devices,
},
}
self.message_processor.send_message(message)
logger.info("[WebSocketClient] Host node ready signal published")
logger.info(f"[WebSocketClient] Host node ready signal published with {len(devices)} devices")

View File

@@ -16,11 +16,13 @@ class BasicConfig:
upload_registry = False
machine_name = "undefined"
vis_2d_enable = False
no_update_feedback = False
enable_resource_load = True
communication_protocol = "websocket"
startup_json_path = None # 填写绝对路径
disable_browser = False # 禁止浏览器自动打开
port = 8002 # 本地HTTP服务
check_mode = False # CI check mode: validates registry imports and file consistency
# 'TRACE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'
log_level: Literal["TRACE", "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"] = "DEBUG"

View File

@@ -6,7 +6,7 @@ Coin Cell Assembly Workstation
"""
from typing import Dict, Any, List, Optional, Union
from unilabos.ros.nodes.resource_tracker import DeviceNodeResourceTracker
from unilabos.resources.resource_tracker import DeviceNodeResourceTracker
from unilabos.device_comms.workstation_base import WorkstationBase, WorkflowInfo
from unilabos.device_comms.workstation_communication import (
WorkstationCommunicationBase, CommunicationConfig, CommunicationProtocol, CoinCellCommunication
@@ -61,7 +61,7 @@ class CoinCellAssemblyWorkstation(WorkstationBase):
# 创建资源跟踪器(如果没有提供)
if resource_tracker is None:
from unilabos.ros.nodes.resource_tracker import DeviceNodeResourceTracker
from unilabos.resources.resource_tracker import DeviceNodeResourceTracker
resource_tracker = DeviceNodeResourceTracker()
# 初始化基类

View File

@@ -4,7 +4,8 @@ import traceback
from typing import Any, Union, List, Dict, Callable, Optional, Tuple
from pydantic import BaseModel
from pymodbus.client.sync import ModbusSerialClient, ModbusTcpClient
from pymodbus.client import ModbusSerialClient, ModbusTcpClient
from pymodbus.framer import FramerType
from typing import TypedDict
from unilabos.device_comms.modbus_plc.modbus import DeviceType, HoldRegister, Coil, InputRegister, DiscreteInputs, DataType, WorderOrder
@@ -402,7 +403,7 @@ class TCPClient(BaseClient):
class RTUClient(BaseClient):
def __init__(self, port: str, baudrate: int, timeout: int):
super().__init__()
self._set_client(ModbusSerialClient(method='rtu', port=port, baudrate=baudrate, timeout=timeout))
self._set_client(ModbusSerialClient(framer=FramerType.RTU, port=port, baudrate=baudrate, timeout=timeout))
self._connect()
if __name__ == '__main__':

View File

@@ -1,26 +1,12 @@
# coding=utf-8
from enum import Enum
from abc import ABC, abstractmethod
from typing import Tuple, Union, Optional, TYPE_CHECKING
from pymodbus.payload import BinaryPayloadDecoder, BinaryPayloadBuilder
from pymodbus.constants import Endian
if TYPE_CHECKING:
from pymodbus.client.sync import ModbusSerialClient, ModbusTcpClient
# Define DataType enum for pymodbus 2.5.3 compatibility
class DataType(Enum):
INT16 = "int16"
UINT16 = "uint16"
INT32 = "int32"
UINT32 = "uint32"
INT64 = "int64"
UINT64 = "uint64"
FLOAT32 = "float32"
FLOAT64 = "float64"
STRING = "string"
BOOL = "bool"
from pymodbus.client import ModbusBaseSyncClient
from pymodbus.client.mixin import ModbusClientMixin
from typing import Tuple, Union, Optional
DataType = ModbusClientMixin.DATATYPE
class WorderOrder(Enum):
BIG = "big"
@@ -33,96 +19,8 @@ class DeviceType(Enum):
INPUT_REGISTER = 'input_register'
def _convert_from_registers(registers, data_type: DataType, word_order: str = 'big'):
"""Convert registers to a value using BinaryPayloadDecoder.
Args:
registers: List of register values
data_type: DataType enum specifying the target data type
word_order: 'big' or 'little' endian
Returns:
Converted value
"""
# Determine byte and word order based on word_order parameter
if word_order == 'little':
byte_order = Endian.Little
word_order_enum = Endian.Little
else:
byte_order = Endian.Big
word_order_enum = Endian.Big
decoder = BinaryPayloadDecoder.fromRegisters(registers, byteorder=byte_order, wordorder=word_order_enum)
if data_type == DataType.INT16:
return decoder.decode_16bit_int()
elif data_type == DataType.UINT16:
return decoder.decode_16bit_uint()
elif data_type == DataType.INT32:
return decoder.decode_32bit_int()
elif data_type == DataType.UINT32:
return decoder.decode_32bit_uint()
elif data_type == DataType.INT64:
return decoder.decode_64bit_int()
elif data_type == DataType.UINT64:
return decoder.decode_64bit_uint()
elif data_type == DataType.FLOAT32:
return decoder.decode_32bit_float()
elif data_type == DataType.FLOAT64:
return decoder.decode_64bit_float()
elif data_type == DataType.STRING:
return decoder.decode_string(len(registers) * 2)
else:
raise ValueError(f"Unsupported data type: {data_type}")
def _convert_to_registers(value, data_type: DataType, word_order: str = 'little'):
"""Convert a value to registers using BinaryPayloadBuilder.
Args:
value: Value to convert
data_type: DataType enum specifying the source data type
word_order: 'big' or 'little' endian
Returns:
List of register values
"""
# Determine byte and word order based on word_order parameter
if word_order == 'little':
byte_order = Endian.Little
word_order_enum = Endian.Little
else:
byte_order = Endian.Big
word_order_enum = Endian.Big
builder = BinaryPayloadBuilder(byteorder=byte_order, wordorder=word_order_enum)
if data_type == DataType.INT16:
builder.add_16bit_int(value)
elif data_type == DataType.UINT16:
builder.add_16bit_uint(value)
elif data_type == DataType.INT32:
builder.add_32bit_int(value)
elif data_type == DataType.UINT32:
builder.add_32bit_uint(value)
elif data_type == DataType.INT64:
builder.add_64bit_int(value)
elif data_type == DataType.UINT64:
builder.add_64bit_uint(value)
elif data_type == DataType.FLOAT32:
builder.add_32bit_float(value)
elif data_type == DataType.FLOAT64:
builder.add_64bit_float(value)
elif data_type == DataType.STRING:
builder.add_string(value)
else:
raise ValueError(f"Unsupported data type: {data_type}")
return builder.to_registers()
class Base(ABC):
def __init__(self, client, name: str, address: int, typ: DeviceType, data_type):
def __init__(self, client: ModbusBaseSyncClient, name: str, address: int, typ: DeviceType, data_type: DataType):
self._address: int = address
self._client = client
self._name = name
@@ -160,11 +58,7 @@ class Coil(Base):
count = value,
slave = slave)
# 检查是否读取出错
if resp.isError():
return [], True
return resp.bits, False
return resp.bits, resp.isError()
def write(self,value: Union[int, float, bool, str, list[bool], list[int], list[float]], data_type: Optional[DataType ]= None, word_order: WorderOrder = WorderOrder.LITTLE, slave = 1) -> bool:
if isinstance(value, list):
@@ -197,18 +91,8 @@ class DiscreteInputs(Base):
count = value,
slave = slave)
# 检查是否读取出错
if resp.isError():
# 根据数据类型返回默认值
if data_type in [DataType.FLOAT32, DataType.FLOAT64]:
return 0.0, True
elif data_type == DataType.STRING:
return "", True
else:
return 0, True
# noinspection PyTypeChecker
return _convert_from_registers(resp.registers, data_type, word_order=word_order.value), False
return self._client.convert_from_registers(resp.registers, data_type, word_order=word_order.value), resp.isError()
def write(self,value: Union[int, float, bool, str, list[bool], list[int], list[float]], data_type: Optional[DataType ]= None, word_order: WorderOrder = WorderOrder.LITTLE, slave = 1) -> bool:
raise ValueError('discrete inputs only support read')
@@ -228,19 +112,8 @@ class HoldRegister(Base):
address = self.address,
count = value,
slave = slave)
# 检查是否读取出错
if resp.isError():
# 根据数据类型返回默认值
if data_type in [DataType.FLOAT32, DataType.FLOAT64]:
return 0.0, True
elif data_type == DataType.STRING:
return "", True
else:
return 0, True
# noinspection PyTypeChecker
return _convert_from_registers(resp.registers, data_type, word_order=word_order.value), False
return self._client.convert_from_registers(resp.registers, data_type, word_order=word_order.value), resp.isError()
def write(self,value: Union[int, float, bool, str, list[bool], list[int], list[float]], data_type: Optional[DataType ]= None, word_order: WorderOrder = WorderOrder.LITTLE, slave = 1) -> bool:
@@ -259,7 +132,7 @@ class HoldRegister(Base):
return self._client.write_register(self.address, value, slave= slave).isError()
else:
# noinspection PyTypeChecker
encoder_resp = _convert_to_registers(value, data_type=data_type, word_order=word_order.value)
encoder_resp = self._client.convert_to_registers(value, data_type=data_type, word_order=word_order.value)
return self._client.write_registers(self.address, encoder_resp, slave=slave).isError()
@@ -280,19 +153,8 @@ class InputRegister(Base):
address = self.address,
count = value,
slave = slave)
# 检查是否读取出错
if resp.isError():
# 根据数据类型返回默认值
if data_type in [DataType.FLOAT32, DataType.FLOAT64]:
return 0.0, True
elif data_type == DataType.STRING:
return "", True
else:
return 0, True
# noinspection PyTypeChecker
return _convert_from_registers(resp.registers, data_type, word_order=word_order.value), False
return self._client.convert_from_registers(resp.registers, data_type, word_order=word_order.value), resp.isError()
def write(self,value: Union[int, float, bool, str, list[bool], list[int], list[float]], data_type: Optional[DataType ]= None, word_order: WorderOrder = WorderOrder.LITTLE, slave = 1) -> bool:
raise ValueError('input register only support read')
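With the BinaryPayloadDecoder/Builder shims removed, the register classes now delegate to the pymodbus 3.x client API directly: DataType is ModbusClientMixin.DATATYPE and conversions go through convert_from_registers / convert_to_registers. A minimal read sketch under those assumptions; the host, register address, and register count below are placeholders:
# Sketch: reading one FLOAT32 holding register with the same pymodbus 3.x calls used above.
from pymodbus.client import ModbusTcpClient
from pymodbus.client.mixin import ModbusClientMixin

DataType = ModbusClientMixin.DATATYPE

client = ModbusTcpClient("192.168.0.10", port=502)   # placeholder PLC address
client.connect()
resp = client.read_holding_registers(address=100, count=2, slave=1)
if not resp.isError():
    value = client.convert_from_registers(resp.registers, DataType.FLOAT32, word_order="big")
client.close()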

(file diff suppressed because it is too large)

View File

@@ -3,7 +3,7 @@ from enum import Enum
from abc import ABC, abstractmethod
from typing import Tuple, Union, Optional, Any, List
from opcua import Client, Node
from opcua import Client, Node, ua
from opcua.ua import NodeId, NodeClass, VariantType
@@ -43,27 +43,72 @@ class Base(ABC):
self._type = typ
self._data_type = data_type
self._node: Optional[Node] = None
def _get_node(self) -> Node:
if self._node is None:
try:
# 检查是否是NumericNodeId(ns=X;i=Y)格式
if "NumericNodeId" in self._node_id:
# 从字符串中提取命名空间和标识符
import re
match = re.search(r'ns=(\d+);i=(\d+)', self._node_id)
if match:
ns = int(match.group(1))
identifier = int(match.group(2))
node_id = NodeId(identifier, ns)
self._node = self._client.get_node(node_id)
# 尝试多种 NodeId 字符串格式解析,兼容不同服务器/库的输出
# 可能的格式示例: 'ns=2;i=1234', 'ns=2;s=SomeString',
# 'StringNodeId(ns=4;s=OPC|变量名)', 'NumericNodeId(ns=2;i=1234)' 等
import re
nid = self._node_id
# 如果已经是 NodeId/Node 对象(库用户可能传入),直接使用
try:
from opcua.ua import NodeId as UaNodeId
if isinstance(nid, UaNodeId):
self._node = self._client.get_node(nid)
return self._node
except Exception:
# 若导入或类型判断失败,则继续下一步
pass
# 直接以字符串形式处理
if isinstance(nid, str):
nid = nid.strip()
# 处理包含类名的格式,如 'StringNodeId(ns=4;s=...)' 或 'NumericNodeId(ns=2;i=...)'
# 提取括号内的内容
match_wrapped = re.match(r'(String|Numeric|Byte|Guid|TwoByteNode|FourByteNode)NodeId\((.*)\)', nid)
if match_wrapped:
# 提取括号内的实际 node_id 字符串
nid = match_wrapped.group(2).strip()
# 常见短格式 'ns=2;i=1234' 或 'ns=2;s=SomeString'
if re.match(r'^ns=\d+;[is]=', nid):
self._node = self._client.get_node(nid)
else:
raise ValueError(f"无法解析节点ID: {self._node_id}")
# 尝试提取 ns 和 i 或 s
# 对于字符串标识符,可能包含特殊字符,使用非贪婪匹配
m_num = re.search(r'ns=(\d+);i=(\d+)', nid)
m_str = re.search(r'ns=(\d+);s=(.+?)(?:\)|$)', nid)
if m_num:
ns = int(m_num.group(1))
identifier = int(m_num.group(2))
node_id = NodeId(identifier, ns)
self._node = self._client.get_node(node_id)
elif m_str:
ns = int(m_str.group(1))
identifier = m_str.group(2).strip()
# 对于字符串标识符,直接使用字符串格式
node_id_str = f"ns={ns};s={identifier}"
self._node = self._client.get_node(node_id_str)
else:
# 回退:尝试直接传入字符串(有些实现接受其它格式)
try:
self._node = self._client.get_node(self._node_id)
except Exception as e:
# 输出更详细的错误信息供调试
print(f"获取节点失败(尝试直接字符串): {self._node_id}, 错误: {e}")
raise
else:
# 直接使用节点ID字符串
# 非字符串,尝试直接使用
self._node = self._client.get_node(self._node_id)
except Exception as e:
print(f"获取节点失败: {self._node_id}, 错误: {e}")
# 添加额外提示,帮助定位 BadNodeIdUnknown 问题
print("提示: 请确认该 node_id 是否来自当前连接的服务器地址空间," \
"以及 CSV/配置中名称与服务器 BrowseName 是否匹配。")
raise
return self._node
@@ -71,16 +116,16 @@ class Base(ABC):
def read(self) -> Tuple[Any, bool]:
"""读取节点值,返回(值, 是否出错)"""
pass
@abstractmethod
def write(self, value: Any) -> bool:
"""写入节点值,返回是否出错"""
pass
@property
def type(self) -> NodeType:
return self._type
@property
def node_id(self) -> str:
return self._node_id
@@ -104,7 +149,56 @@ class Variable(Base):
def write(self, value: Any) -> bool:
try:
self._get_node().set_value(value)
# 如果声明了数据类型,则尝试转换并使用对应的 Variant 写入
coerced = value
try:
if self._data_type is not None:
# 基于声明的数据类型做简单类型转换
dt = self._data_type
if dt in (DataType.SBYTE, DataType.BYTE, DataType.INT16, DataType.UINT16,
DataType.INT32, DataType.UINT32, DataType.INT64, DataType.UINT64):
# 数值类型 -> int
if isinstance(value, str):
coerced = int(value)
else:
coerced = int(value)
elif dt in (DataType.FLOAT, DataType.DOUBLE):
if isinstance(value, str):
coerced = float(value)
else:
coerced = float(value)
elif dt == DataType.BOOLEAN:
if isinstance(value, str):
v = value.strip().lower()
if v in ("true", "1", "yes", "on"):
coerced = True
elif v in ("false", "0", "no", "off"):
coerced = False
else:
coerced = bool(value)
else:
coerced = bool(value)
elif dt == DataType.STRING or dt == DataType.BYTESTRING or dt == DataType.DATETIME:
coerced = str(value)
# 使用 ua.Variant 明确指定 VariantType
try:
variant = ua.Variant(coerced, dt.value)
self._get_node().set_value(variant)
except Exception:
# 回退:有些 set_value 实现接受 (value, variant_type)
try:
self._get_node().set_value(coerced, dt.value)
except Exception:
# 最后回退到直接写入(保持兼容性)
self._get_node().set_value(coerced)
else:
# 未声明数据类型,直接写入
self._get_node().set_value(value)
except Exception:
# 若在转换或按数据类型写入失败,尝试直接写入原始值并让上层捕获错误
self._get_node().set_value(value)
return False
except Exception as e:
print(f"写入变量 {self._name} 失败: {e}")
@@ -116,24 +210,54 @@ class Method(Base):
super().__init__(client, name, node_id, NodeType.METHOD, data_type)
self._parent_node_id = parent_node_id
self._parent_node = None
def _get_parent_node(self) -> Node:
if self._parent_node is None:
try:
# 检查是否是NumericNodeId(ns=X;i=Y)格式
if "NumericNodeId" in self._parent_node_id:
# 从字符串中提取命名空间和标识符
import re
match = re.search(r'ns=(\d+);i=(\d+)', self._parent_node_id)
if match:
ns = int(match.group(1))
identifier = int(match.group(2))
node_id = NodeId(identifier, ns)
self._parent_node = self._client.get_node(node_id)
# 处理父节点ID使用与_get_node相同的解析逻辑
import re
nid = self._parent_node_id
# 如果已经是 NodeId 对象,直接使用
try:
from opcua.ua import NodeId as UaNodeId
if isinstance(nid, UaNodeId):
self._parent_node = self._client.get_node(nid)
return self._parent_node
except Exception:
pass
# 字符串处理
if isinstance(nid, str):
nid = nid.strip()
# 处理包含类名的格式
match_wrapped = re.match(r'(String|Numeric|Byte|Guid|TwoByteNode|FourByteNode)NodeId\((.*)\)', nid)
if match_wrapped:
nid = match_wrapped.group(2).strip()
# 常见短格式
if re.match(r'^ns=\d+;[is]=', nid):
self._parent_node = self._client.get_node(nid)
else:
raise ValueError(f"无法解析父节点ID: {self._parent_node_id}")
# 提取 ns 和 i 或 s
m_num = re.search(r'ns=(\d+);i=(\d+)', nid)
m_str = re.search(r'ns=(\d+);s=(.+?)(?:\)|$)', nid)
if m_num:
ns = int(m_num.group(1))
identifier = int(m_num.group(2))
node_id = NodeId(identifier, ns)
self._parent_node = self._client.get_node(node_id)
elif m_str:
ns = int(m_str.group(1))
identifier = m_str.group(2).strip()
node_id_str = f"ns={ns};s={identifier}"
self._parent_node = self._client.get_node(node_id_str)
else:
# 回退
self._parent_node = self._client.get_node(self._parent_node_id)
else:
# 直接使用节点ID字符串
self._parent_node = self._client.get_node(self._parent_node_id)
except Exception as e:
print(f"获取父节点失败: {self._parent_node_id}, 错误: {e}")
@@ -147,7 +271,7 @@ class Method(Base):
def write(self, value: Any) -> bool:
"""方法节点不支持写入操作"""
return True
def call(self, *args) -> Tuple[Any, bool]:
"""调用方法,返回(返回值, 是否出错)"""
try:
@@ -161,7 +285,7 @@ class Method(Base):
class Object(Base):
def __init__(self, client: Client, name: str, node_id: str):
super().__init__(client, name, node_id, NodeType.OBJECT, None)
def read(self) -> Tuple[Any, bool]:
"""对象节点不支持直接读取操作"""
return None, True
@@ -169,7 +293,7 @@ class Object(Base):
def write(self, value: Any) -> bool:
"""对象节点不支持直接写入操作"""
return True
def get_children(self) -> Tuple[List[Node], bool]:
"""获取子节点列表,返回(子节点列表, 是否出错)"""
try:
@@ -177,4 +301,4 @@ class Object(Base):
return children, False
except Exception as e:
print(f"获取对象 {self._name} 的子节点失败: {e}")
return [], True
return [], True
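In short, _get_node() (and the matching parent-node logic for methods) now resolves NodeId spellings lazily and tolerates both the short and the wrapped string forms; unrecognized strings fall back to client.get_node() on the raw value. Illustrative inputs, per the comments above:
# Sketch: NodeId strings the new parsing accepts.
accepted_node_ids = [
    "ns=2;i=1234",                       # short numeric form
    "ns=2;s=SomeString",                 # short string form
    "NumericNodeId(ns=2;i=1234)",        # wrapped numeric form
    "StringNodeId(ns=4;s=OPC|变量名)",    # wrapped string form, identifier may contain '|'
]
# A ua.NodeId instance is also accepted and used as-is; anything else is passed
# straight to client.get_node(), and failures are re-raised with a hint about
# BadNodeIdUnknown / BrowseName mismatches.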

View File

@@ -128,14 +128,21 @@ class ResourceVisualization:
new_dev.set("device_name", node["id"]+"_")
# if node["parent"] is not None:
# new_dev.set("station_name", node["parent"]+'_')
new_dev.set("x",str(float(node["position"]["position"]["x"])/1000))
new_dev.set("y",str(float(node["position"]["position"]["y"])/1000))
new_dev.set("z",str(float(node["position"]["position"]["z"])/1000))
if "position" in node:
new_dev.set("x",str(float(node["position"]["position"]["x"])/1000))
new_dev.set("y",str(float(node["position"]["position"]["y"])/1000))
new_dev.set("z",str(float(node["position"]["position"]["z"])/1000))
if "rotation" in node["config"]:
new_dev.set("rx",str(float(node["config"]["rotation"]["x"])))
new_dev.set("ry",str(float(node["config"]["rotation"]["y"])))
new_dev.set("r",str(float(node["config"]["rotation"]["z"])))
if "pose" in node:
new_dev.set("x",str(float(node["pose"]["position"]["x"])/1000))
new_dev.set("y",str(float(node["pose"]["position"]["y"])/1000))
new_dev.set("z",str(float(node["pose"]["position"]["z"])/1000))
new_dev.set("rx",str(float(node["pose"]["rotation"]["x"])))
new_dev.set("ry",str(float(node["pose"]["rotation"]["y"])))
new_dev.set("r",str(float(node["pose"]["rotation"]["z"])))
if "device_config" in node["config"]:
for key, value in node["config"]["device_config"].items():
new_dev.set(key, str(value))
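The visualization export now accepts either layout of a resource node: a top-level "position" block (with rotation under config) or a combined "pose" block; x/y/z are divided by 1000, presumably converting millimetres to metres, while rotation values are passed through. Two illustrative node shapes (all values are placeholders):
# Sketch: the two node layouts handled by the code above.
node_with_position = {
    "id": "dev1",
    "position": {"position": {"x": 1200.0, "y": 300.0, "z": 0.0}},
    "config": {"rotation": {"x": 0.0, "y": 0.0, "z": 90.0}},
}
node_with_pose = {
    "id": "dev2",
    "pose": {
        "position": {"x": 500.0, "y": 0.0, "z": 150.0},
        "rotation": {"x": 0.0, "y": 0.0, "z": 180.0},
    },
    "config": {"device_config": {"color": "blue"}},   # each key/value is copied onto the element
}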

unilabos/devices/LICENSE (new file, 73 lines)
View File

@@ -0,0 +1,73 @@
Uni-Lab-OS软件许可使用准则
本软件使用准则(以下简称"本准则"旨在规范用户在使用Uni-Lab-OS软件以下简称"本软件")过程中的行为和义务。在下载、安装、使用或以任何方式访问本软件之前,请务必仔细阅读并理解以下条款和条件。若您不同意本准则的全部或部分内容,请您立即停止使用本软件。一旦您开始访问、下载、安装、使用本软件,即表示您已阅读、理解并同意接受本准则的约束。
1、使用许可
1.1 本软件的所有权及版权归北京深势科技有限公司(以下简称"深势科技")所有。在遵守本准则的前提下,深势科技特此授予学术用户(以下简称"您")一个全球范围内的、非排他性的、免版权费用的使用许可,可为了满足学术目的而使用本软件。
1.2 本准则下授予的许可仅适用于本软件的二进制代码版本。您不对本软件源代码拥有任何权利。
2、使用限制
2.1 本准则仅授予学术用户出于学术目的使用本软件,任何商业组织、商业机构或其他非学术用户不得使用本软件,如果违反本条款,深势科技将保留一切追诉的权利。
2.2 您将本软件用于任何商业行为,应取得深势科技的商业许可。
2.3 您不得将本软件或任何形式的衍生作品用于任何商业目的,也不得将其出售、出租、转让、分发或以其他方式提供给任何第三方。您必须确保本软件的使用仅限于您个人学术研究,禁止您为任何其他实体的利益使用本软件(无论是否收费)。
2.4 您不得以任何方式修改、破解、反编译、反汇编、反向工程、隔离、分离或以其他方式从任何程序或文档中提取源代码或试图发现本软件的源代码。您不得以任何方式去除、修改或屏蔽本软件中的任何版权、商标或其他专有权利声明。您不得使用本软件进行任何非法活动,包括但不限于侵犯他人的知识产权、隐私权等。
2.5 您同意将本软件仅用于合法的学术目的,且遵守您所在国家或地区的法律法规,您将承担因违反法律法规而产生的一切法律责任。
3、软件所有权
本软件在此仅作使用许可,并非出售。本软件及与软件有关的全部文档的所有权及其他所有权利(包括但不限于知识产权和商业秘密),始终是深势科技的专有财产,您不拥有任何权利,但本准则下被明确授予的有限的使用许可权利除外。
4、衍生作品传播规范
若您传播基于Uni-Lab-OS程序修改形成的作品须同时满足以下全部条件
4.1 作品必须包含显著声明,明确标注修改内容及修改日期;
4.2 作品必须声明本作品依据本许可协议发布;
4.3 必须将整个作品(包括修改部分)作为整体授予获取副本者本许可协议的保障,且该许可将自动延伸适用于作品全组件(无论其以何种形式打包);
4.4 若衍生作品含交互式用户界面每个界面均须显示合规法律声明若原始Uni-Lab-OS程序的交互界面未展示法律声明您的衍生作品可免除此义务。
5、提出建议
您可以对本软件提出建议,前提是:
i您声明并保证该建议未侵害任何第三方的任何知识产权
ii您承认深势科技有权使用该建议但无使用该建议的义务
iii您授予深势科技一项非独占的、不可撤销的、可分许可的、无版权费的、全球范围的著作权许可以复制、分发、传播、公开展示、公开表演、修改、翻译、基于其制作衍生作品、生产、制作、推销、销售、提供销售和/或以其他方式整体或部分地使用该建议和基于其的衍生作品,包括但不限于,通过将该建议整体或部分地纳入深势科技的软件和/或其他软件,以及在现存的或将来任何时候存在的任何媒介中或通过该媒介体现,以及为从事上述活动而授予多个分许可;
iv您特此授予深势科技一项永久的、全球范围的、非独占性的、免费的、免特许权使用费的、不可撤销的专利许可许可其制造、委托制造、使用、要约销售、销售、进口及以其他方式转让该建议和基于其的衍生专利。上述专利许可的适用范围仅限于以下专利权利要求您有权许可的、且仅因您的建议本身或因您的建议与所提交的本软件结合而必然构成侵权的专利权利要求。若任何实体针对您或其他实体提起专利诉讼包括诉讼中的交叉诉讼或反诉主张该建议或您所贡献的软件构成直接或间接专利侵权则依据本协议授予的、针对该建议或软件的任何专利许可自该诉讼提起之日起终止。
v您放弃对该建议的任何权利或主张深势科技无需承担任何义务、版税或基于知识产权或其他方面的限制。
6、引用要求
如您使用本软件获得的成果发表在出版物上,您应在成果中承认对Uni-Lab-OS软件的使用并标注权利人名称。引用 Uni-Lab-OS时,请使用以下内容:
@article{gao2025unilabos,
title = {UniLabOS: An AI-Native Operating System for Autonomous Laboratories},
doi = {10.48550/arXiv.2512.21766},
publisher = {arXiv},
author = {Gao, Jing and Chang, Junhan and Que, Haohui and Xiong, Yanfei and Zhang, Shixiang and Qi, Xianwei and Liu, Zhen and Wang, Jun-Jie and Ding, Qianjun and Li, Xinyu and Pan, Ziwei and Xie, Qiming and Yan, Zhuang and Yan, Junchi and Zhang, Linfeng},
year = {2025}
}
7、保留权利
您认可,所有未被明确授予您的本软件的权利,无论是当前或今后存在的,均由深势科技予以保留,任何未经深势科技明确授权而使用本软件的行为将被视为侵权,深势科技有权追究侵权者的一切法律责任。
8、保密信息
您同意将本软件代码及相关文档视为深势科技的机密信息,您不会向任何第三方提供相关代码,并将采取合理审慎的使用态度来防止本软件代码及相关文档被泄露。
9、无保证
该软件是"按原样"提供的,没有任何明示或暗示的保证,不包含任何代码或规范没有缺陷、适销性、适用于特定目的或不侵犯第三方权利的保证。您同意您自主承担使用本软件或与本准则有关的全部风险。
10、免责条款
在任何情况下,无论基于侵权(包括过失)、合同或其他法律理论,除非适用法律强制规定(如故意或重大过失行为)或另有书面协议,深势科技不对被许可人因软件许可、使用或无法使用软件所致损害承担责任(包括任何性质的直接、间接、特殊、偶发或后果性损害,例如但不限于商誉损失、停工损失、计算机故障或失灵造成的损害,以及其他一切商业损害或损失),即使深势科技已被告知发生此类损害的可能性亦不例外。
被许可人在再分发软件或其衍生作品时,仅能以自身名义独立承担责任进行操作,不得代表深势科技或其他被许可人。
11、终止
如果您以任何方式违反本准则或未能遵守本准则的任何重要条款或条件,则您被授予的所有权利将自动终止。
12、举报
如果您认为有人违反了本准则,请向深势科技进行举报,深势科技将对您的身份进行严格保密。举报邮箱:changjh@dp.tech。
13、法律管辖
本准则中的任何内容均不得解释为通过暗示、禁止反悔或其他方式授予本准则中授予的许可或权利以外的任何许可或权利。如果本准则的任何条款被认定为不可执行,则仅在必要的范围内对该条款进行修改,使其可执行。本准则应受中华人民共和国法律管辖,不适用法律冲突条款及《联合国国际货物销售合同公约》,因本准则产生的一切争议由北京市海淀区人民法院管辖。
14、未来版本
深势科技保留不经事先通知随时变更或停止本软件或本准则的权利。
15、语言优先
本准则同时具有中文版本和英文版本,如果英文版本和中文版本有冲突,以中文版本为准。

View File

@@ -0,0 +1,73 @@
Uni-Lab-OS License Agreement
Preamble
This License Agreement (the "Agreement") is instituted to govern user conduct and obligations in relation to the utilization of the Uni-Lab-OS (the "Software"). By accessing, downloading, installing, or utilizing the Software in any manner, you hereby acknowledge that you have meticulously reviewed, comprehended, and consented to be legally bound by the terms herein. If you dissent from any provision of this Agreement, you must forthwith cease all interaction with the Software.
1. Grant of License
1.1 The proprietary rights to the Software are exclusively retained by Beijing DP Technology Co., Ltd. ("DP Technology"). Subject to full compliance with this Agreement, DP Technology hereby grants academic users ("Licensee") a worldwide, non-exclusive, royalty-free license to utilise the Software solely for non-commercial academic pursuits.
1.2 The foregoing license applies exclusively to the Software's executable binary code. No rights whatsoever are conferred to the Software's source code.
2. Usage Restrictions
2.1 This license is restricted to academic users engaging in scholastic activities. Commercial entities, institutions, or any non-academic parties are expressly prohibited from utilizing the Software. Violations of this clause shall entitle DP Technology to pursue all available legal remedies.
2.2 The Licensee shall obtain a commercial license from DP Technology for any commercial use of the Software.
2.3 The Licensee shall not utilise the Software or any derivative works for commercial purposes, nor distribute, sublicense, lease, transfer, or otherwise disseminate the Software to third parties. The Licensee is strictly prohibited from utilizing the Software for the benefit of any third-party entity, whether gratuitously or otherwise.
2.4 Reverse engineering, decompilation, disassembly, code isolation, or any attempt to derive source code from the Software is strictly prohibited. The Licensee shall not alter, circumvent, or remove copyright notices, trademarks, or proprietary legends embedded in the Software. Use of the Software for unlawful activities—including but not limited to intellectual property infringement or privacy violations—is categorically barred.
2.5 The Licensee warrants that the Software shall be utilised solely for lawful academic purposes in compliance with applicable jurisdictional statutes. All legal liabilities arising from noncompliance shall be borne exclusively by the Licensee.
3. Proprietary Rights
This Agreement confers a license to utilise the Software, not a transfer of ownership. All intellectual property rights—including copyrights, patents, trade secrets, and documentation—remain the exclusive dominion of DP Technology. The Licensee acquires no entitlements beyond the limited usage privileges expressly delineated herein.
4. Derivative Work
You may convey a work based on the Software, or the modifications to produce it from the Software, provided that you meet all of these conditions:
4.1 The work must carry prominent notices stating that you modified it, and giving a relevant date.
4.2 The work must carry prominent notices stating that it is released under this License.
4.3 You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
4.4 If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Software has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.
5. Feedback and Proposals
Licensees may submit proposals, suggestions, or improvements pertaining to the Software ("Feedback") under the following conditions:
(a) Licensee represents and warrants that such Feedback does not infringe upon any third-party intellectual property rights;
(b) Licensee acknowledges that DP Technology reserves the right, but assumes no obligation, to utilize such Feedback;
(c) Licensee irrevocably grants DP Technology a non-exclusive, royalty-free, perpetual, worldwide, sublicensable copyright license to reproduce, distribute, modify, publicly perform or display, translate, create derivative works of, commercialize, and otherwise exploit the Feedback in any medium or format, whether now known or hereafter devised, including the right to grant multiple tiers of sublicenses to enable such activities;
(d) Licensee hereby grants DP Technology a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Feedback and such Derivative Works, where such license applies only to those patent claims licensable by Licensee that are necessarily infringed by the Feedback(s) alone or by combination of the Feedback(s) with the Software to which such Feedback(s) were submitted. If any entity institutes patent litigation against Licensee or any other entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Feedback, or the Software to which you have contributed, constitutes direct or contributory patent infringement, then any patent licenses granted under this Agreement for the Feedback or Software shall terminate as of the date such litigation is filed.
(e) Licensee hereby waives all claims, proprietary rights, or restrictions related to DP Technology's use of such Feedback.
6. Citation Requirement
If academic or research output generated using the Software is published, Licensee must explicitly acknowledge the use of Uni-Lab-OS and attribute ownership to DP Technology. The following citation must be included:
@article{gao2025unilabos,
title = {UniLabOS: An AI-Native Operating System for Autonomous Laboratories},
doi = {10.48550/arXiv.2512.21766},
publisher = {arXiv},
author = {Gao, Jing and Chang, Junhan and Que, Haohui and Xiong, Yanfei and Zhang, Shixiang and Qi, Xianwei and Liu, Zhen and Wang, Jun-Jie and Ding, Qianjun and Li, Xinyu and Pan, Ziwei and Xie, Qiming and Yan, Zhuang and Yan, Junchi and Zhang, Linfeng},
year = {2025}
}
7. Reservation of Rights
All rights not expressly granted herein, whether existing now or arising in the future, are exclusively reserved by DP Technology. Any unauthorized use of the Software beyond the scope of this Agreement constitutes infringement, and DP Technology reserves all legal rights to pursue remedies against violators.
8. Confidentiality
Licensee agrees to treat the Software's code, documentation, and related materials as confidential information. Licensee shall not disclose such materials to third parties and shall employ reasonable safeguards to prevent unauthorized access, dissemination, or misuse.
9. Disclaimer of Warranties
The software is provided "as is," without warranties of any kind, express or implied, including but not limited to warranties of merchantability, fitness for a particular purpose, non-infringement, or error-free operation. Licensee accepts all risks associated with the use of the software.
10. Limitation of Liability
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall DP Technology be liable to Licensee for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the software (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if DP Technology has been advised of the possibility of such damages.
While redistributing the Software or Derivative Works thereof, Licensee may act only on Licensee's own behalf and on Licensee's sole responsibility, not on behalf of DP Technology or any other Licensee.
11. Termination
All rights granted herein shall terminate immediately and automatically if Licensee materially breaches any provision of this Agreement.
12. Reporting Violations
To report suspected violations of this Agreement, notify DP Technology via the designated email address: changjh@dp.tech. DP Technology shall maintain the confidentiality of the reporter's identity.
13. Governing Law and Dispute Resolution
This Agreement shall be governed by the laws of the People's Republic of China, excluding its conflict of laws principles and the United Nations Convention on Contracts for the International Sale of Goods. Any dispute arising from this Agreement shall be exclusively adjudicated by the Haidian District People's Court in Beijing.
14. Amendments and Updates
DP Technology reserves the right to modify, suspend, or terminate the Software or this Agreement at any time without prior notice.
15. Language Priority
This Agreement is provided in both Chinese and English. In the event of any discrepancy, the Chinese version shall prevail.

View File

@@ -0,0 +1,712 @@
#!/usr/bin/env python3
import asyncio
import json
import subprocess
import sys
import threading
from typing import Optional, Dict, Any
import logging
import requests
import websockets
logging.getLogger("zeep").setLevel(logging.WARNING)
logging.getLogger("zeep.xsd.schema").setLevel(logging.WARNING)
logging.getLogger("zeep.xsd.schema.schema").setLevel(logging.WARNING)
from onvif import ONVIFCamera # 新增ONVIF PTZ 控制
# ======================= 独立的 PTZController =======================
class PTZController:
def __init__(self, host: str, port: int, user: str, password: str):
"""
:param host: 摄像机 IP 或域名(和 RTSP 的一样即可)
:param port: ONVIF 端口(多数为 80,看你的设备)
:param user: 摄像机用户名
:param password: 摄像机密码
"""
self.host = host
self.port = port
self.user = user
self.password = password
self.cam: Optional[ONVIFCamera] = None
self.media_service = None
self.ptz_service = None
self.profile = None
def connect(self) -> bool:
"""
建立 ONVIF 连接并初始化 PTZ 能力,失败返回 False(不抛异常)
Note: 首先 pip install onvif-zeep
"""
try:
self.cam = ONVIFCamera(self.host, self.port, self.user, self.password)
self.media_service = self.cam.create_media_service()
self.ptz_service = self.cam.create_ptz_service()
profiles = self.media_service.GetProfiles()
if not profiles:
print("[PTZ] No media profiles found on camera.", file=sys.stderr)
return False
self.profile = profiles[0]
return True
except Exception as e:
print(f"[PTZ] Failed to init ONVIF PTZ: {e}", file=sys.stderr)
return False
def _continuous_move(self, pan: float, tilt: float, zoom: float, duration: float) -> bool:
"""
连续移动一段时间(秒),之后自动停止。
此函数为阻塞模式:只有在 Stop 调用结束后,才返回 True/False。
"""
if not self.ptz_service or not self.profile:
print("[PTZ] _continuous_move: ptz_service or profile not ready", file=sys.stderr)
return False
# 进入前先强行停一下,避免前一次残留动作
self._force_stop()
req = self.ptz_service.create_type("ContinuousMove")
req.ProfileToken = self.profile.token
req.Velocity = {
"PanTilt": {"x": pan, "y": tilt},
"Zoom": {"x": zoom},
}
try:
print(f"[PTZ] ContinuousMove start: pan={pan}, tilt={tilt}, zoom={zoom}, duration={duration}", file=sys.stderr)
self.ptz_service.ContinuousMove(req)
except Exception as e:
print(f"[PTZ] ContinuousMove failed: {e}", file=sys.stderr)
return False
# 阻塞等待:这里决定“运动时间”(注意:实际会等待 2 * duration 秒后才调用 Stop)
import time
wait_seconds = max(2 * duration, 0.0)
time.sleep(wait_seconds)
# 运动完成后强制停止
return self._force_stop()
def stop(self) -> bool:
"""
阻塞调用 Stop(带重试),成功返回 True,失败返回 False。
"""
return self._force_stop()
# ------- 对外动作接口(给 CameraController 调用) -------
# 所有接口都为“阻塞模式”:只有在运动 + Stop 完成后才返回 True/False
def move_up(self, speed: float = 0.5, duration: float = 1.0) -> bool:
print(f"[PTZ] move_up called, speed={speed}, duration={duration}", file=sys.stderr)
return self._continuous_move(pan=0.0, tilt=+speed, zoom=0.0, duration=duration)
def move_down(self, speed: float = 0.5, duration: float = 1.0) -> bool:
print(f"[PTZ] move_down called, speed={speed}, duration={duration}", file=sys.stderr)
return self._continuous_move(pan=0.0, tilt=-speed, zoom=0.0, duration=duration)
def move_left(self, speed: float = 0.2, duration: float = 1.0) -> bool:
print(f"[PTZ] move_left called, speed={speed}, duration={duration}", file=sys.stderr)
return self._continuous_move(pan=-speed, tilt=0.0, zoom=0.0, duration=duration)
def move_right(self, speed: float = 0.2, duration: float = 1.0) -> bool:
print(f"[PTZ] move_right called, speed={speed}, duration={duration}", file=sys.stderr)
return self._continuous_move(pan=+speed, tilt=0.0, zoom=0.0, duration=duration)
# ------- 占位的变倍接口(当前设备不支持) -------
def zoom_in(self, speed: float = 0.2, duration: float = 1.0) -> bool:
"""
当前设备不支持变倍;保留方法只是避免上层调用时报错。
"""
print("[PTZ] zoom_in is disabled for this device.", file=sys.stderr)
return False
def zoom_out(self, speed: float = 0.2, duration: float = 1.0) -> bool:
"""
当前设备不支持变倍;保留方法只是避免上层调用时报错。
"""
print("[PTZ] zoom_out is disabled for this device.", file=sys.stderr)
return False
def _force_stop(self, retries: int = 3, delay: float = 0.1) -> bool:
"""
尝试多次调用 Stop作为“强制停止”手段。
:param retries: 重试次数
:param delay: 每次重试间隔(秒)
"""
if not self.ptz_service or not self.profile:
print("[PTZ] _force_stop: ptz_service or profile not ready", file=sys.stderr)
return False
import time
last_error = None
for i in range(retries):
try:
print(f"[PTZ] _force_stop: calling Stop(), attempt={i+1}", file=sys.stderr)
self.ptz_service.Stop({"ProfileToken": self.profile.token})
print("[PTZ] _force_stop: Stop() returned OK", file=sys.stderr)
return True
except Exception as e:
last_error = e
print(f"[PTZ] _force_stop: Stop() failed at attempt {i+1}: {e}", file=sys.stderr)
time.sleep(delay)
print(f"[PTZ] _force_stop: all {retries} attempts failed, last error: {last_error}", file=sys.stderr)
return False
# ======================= CameraController加入 PTZ =======================
class CameraController:
"""
Uni-Lab-OS 摄像头驱动(driver 形式)
启动 Uni-Lab-OS 后,立即开始推流。
- WebSocket 信令:通过 signal_backend_url 连接到后端
例如: wss://sciol.ac.cn/api/realtime/signal/host/<host_id>
- 媒体服务器:通过 rtmp_url / webrtc_api / webrtc_stream_url
当前配置为 SRS,与独立 HostSimulator(独立运行脚本)保持一致。
"""
def __init__(
self,
host_id: str = "demo-host",
# 1)信令后端(WebSocket)
signal_backend_url: str = "wss://sciol.ac.cn/api/realtime/signal/host",
# 2)媒体后端(RTMP + WebRTC API)
rtmp_url: str = "rtmp://srs.sciol.ac.cn:4499/live/camera-01",
webrtc_api: str = "https://srs.sciol.ac.cn/rtc/v1/play/",
webrtc_stream_url: str = "webrtc://srs.sciol.ac.cn:4500/live/camera-01",
camera_rtsp_url: str = "",
# 3)PTZ 控制相关(ONVIF)
ptz_host: str = "", # 一般就是摄像头 IP,比如 "192.168.31.164"
ptz_port: int = 80, # ONVIF 端口,不一定是 80,按实际情况改
ptz_user: str = "", # admin
ptz_password: str = "", # admin123
):
self.host_id = host_id
self.camera_rtsp_url = camera_rtsp_url
# 拼接最终的 WebSocket URL.../host/<host_id>
signal_backend_url = signal_backend_url.rstrip("/")
if not signal_backend_url.endswith("/host"):
signal_backend_url = signal_backend_url + "/host"
self.signal_backend_url = f"{signal_backend_url}/{host_id}"
# 媒体服务器配置
self.rtmp_url = rtmp_url
self.webrtc_api = webrtc_api
self.webrtc_stream_url = webrtc_stream_url
# PTZ 控制
self.ptz_host = ptz_host
self.ptz_port = ptz_port
self.ptz_user = ptz_user
self.ptz_password = ptz_password
self._ptz: Optional[PTZController] = None
self._init_ptz_if_possible()
# 运行时状态
self._ws: Optional[object] = None
self._ffmpeg_process: Optional[subprocess.Popen] = None
self._running = False
self._loop_task: Optional[asyncio.Future] = None
# 事件循环 & 线程
self._loop: Optional[asyncio.AbstractEventLoop] = None
self._loop_thread: Optional[threading.Thread] = None
try:
self.start()
except Exception as e:
print(f"[CameraController] __init__ auto start failed: {e}", file=sys.stderr)
# ------------------------ PTZ 初始化 ------------------------
# ------------------------ PTZ 公开动作方法(一个动作一个函数) ------------------------
def ptz_move_up(self, speed: float = 0.5, duration: float = 1.0) -> bool:
print(f"[CameraController] ptz_move_up called, speed={speed}, duration={duration}")
if self._ptz is None:
print("[CameraController] PTZ not initialized.", file=sys.stderr)
return False
return self._ptz.move_up(speed=speed, duration=duration)
def ptz_move_down(self, speed: float = 0.5, duration: float = 1.0) -> bool:
print(f"[CameraController] ptz_move_down called, speed={speed}, duration={duration}")
if self._ptz is None:
print("[CameraController] PTZ not initialized.", file=sys.stderr)
return False
return self._ptz.move_down(speed=speed, duration=duration)
def ptz_move_left(self, speed: float = 0.2, duration: float = 1.0) -> bool:
print(f"[CameraController] ptz_move_left called, speed={speed}, duration={duration}")
if self._ptz is None:
print("[CameraController] PTZ not initialized.", file=sys.stderr)
return False
return self._ptz.move_left(speed=speed, duration=duration)
def ptz_move_right(self, speed: float = 0.2, duration: float = 1.0) -> bool:
print(f"[CameraController] ptz_move_right called, speed={speed}, duration={duration}")
if self._ptz is None:
print("[CameraController] PTZ not initialized.", file=sys.stderr)
return False
return self._ptz.move_right(speed=speed, duration=duration)
def zoom_in(self, speed: float = 0.2, duration: float = 1.0) -> bool:
"""
当前设备不支持变倍;保留方法只是避免上层调用时报错。
"""
print("[PTZ] zoom_in is disabled for this device.", file=sys.stderr)
return False
def zoom_out(self, speed: float = 0.2, duration: float = 1.0) -> bool:
"""
当前设备不支持变倍;保留方法只是避免上层调用时报错。
"""
print("[PTZ] zoom_out is disabled for this device.", file=sys.stderr)
return False
def ptz_stop(self):
if self._ptz is None:
print("[CameraController] PTZ not initialized.", file=sys.stderr)
return
self._ptz.stop()
def _init_ptz_if_possible(self):
"""
根据 ptz_host / user / password 初始化 PTZ
如果配置信息不全,则不启用 PTZ(静默跳过)
"""
if not (self.ptz_host and self.ptz_user and self.ptz_password):
return
ctrl = PTZController(
host=self.ptz_host,
port=self.ptz_port,
user=self.ptz_user,
password=self.ptz_password,
)
if ctrl.connect():
self._ptz = ctrl
else:
self._ptz = None
# ---------------------------------------------------------------------
# 对外暴露的方法:供 Uni-Lab-OS 调用
# ---------------------------------------------------------------------
def start(self, config: Optional[Dict[str, Any]] = None):
"""
启动 Camera 连接 & 消息循环,并在启动时就开启 FFmpeg 推流。
"""
if self._running:
return {"status": "already_running", "host_id": self.host_id}
# 应用 config 覆盖(如果有)
if config:
self.camera_rtsp_url = config.get("camera_rtsp_url", self.camera_rtsp_url)
cfg_host_id = config.get("host_id")
if cfg_host_id:
self.host_id = cfg_host_id
signal_backend_url = config.get("signal_backend_url")
if signal_backend_url:
signal_backend_url = signal_backend_url.rstrip("/")
if not signal_backend_url.endswith("/host"):
signal_backend_url = signal_backend_url + "/host"
self.signal_backend_url = f"{signal_backend_url}/{self.host_id}"
self.rtmp_url = config.get("rtmp_url", self.rtmp_url)
self.webrtc_api = config.get("webrtc_api", self.webrtc_api)
self.webrtc_stream_url = config.get(
"webrtc_stream_url", self.webrtc_stream_url
)
# PTZ 相关配置也允许通过 config 注入
self.ptz_host = config.get("ptz_host", self.ptz_host)
self.ptz_port = int(config.get("ptz_port", self.ptz_port))
self.ptz_user = config.get("ptz_user", self.ptz_user)
self.ptz_password = config.get("ptz_password", self.ptz_password)
self._init_ptz_if_possible()
self._running = True
# === start 时启动 FFmpeg 推流 ===
self._start_ffmpeg()
# 创建新的事件循环和线程(用于 WebSocket 信令)
self._loop = asyncio.new_event_loop()
def loop_runner(loop: asyncio.AbstractEventLoop):
asyncio.set_event_loop(loop)
try:
loop.run_forever()
except Exception as e:
print(f"[CameraController] event loop error: {e}", file=sys.stderr)
self._loop_thread = threading.Thread(
target=loop_runner, args=(self._loop,), daemon=True
)
self._loop_thread.start()
self._loop_task = asyncio.run_coroutine_threadsafe(
self._run_main_loop(), self._loop
)
return {
"status": "started",
"host_id": self.host_id,
"signal_backend_url": self.signal_backend_url,
"rtmp_url": self.rtmp_url,
"webrtc_api": self.webrtc_api,
"webrtc_stream_url": self.webrtc_stream_url,
}
def stop(self) -> Dict[str, Any]:
"""
停止推流 & 断开 WebSocket并关闭事件循环线程。
"""
self._running = False
self._stop_ffmpeg()
if self._ws and self._loop is not None:
async def close_ws():
try:
await self._ws.close()
except Exception as e:
print(
f"[CameraController] error when closing WebSocket: {e}",
file=sys.stderr,
)
asyncio.run_coroutine_threadsafe(close_ws(), self._loop)
if self._loop_task is not None:
if not self._loop_task.done():
self._loop_task.cancel()
try:
self._loop_task.result()
except asyncio.CancelledError:
pass
except Exception as e:
print(
f"[CameraController] main loop task error in stop(): {e}",
file=sys.stderr,
)
finally:
self._loop_task = None
if self._loop is not None:
try:
self._loop.call_soon_threadsafe(self._loop.stop)
except Exception as e:
print(
f"[CameraController] error when stopping event loop: {e}",
file=sys.stderr,
)
if self._loop_thread is not None:
try:
self._loop_thread.join(timeout=5)
except Exception as e:
print(
f"[CameraController] error when joining loop thread: {e}",
file=sys.stderr,
)
finally:
self._loop_thread = None
self._ws = None
self._loop = None
return {"status": "stopped", "host_id": self.host_id}
def get_status(self) -> Dict[str, Any]:
"""
查询当前状态,方便在 Uni-Lab-OS 中做监控。
"""
ws_closed = None
if self._ws is not None:
ws_closed = getattr(self._ws, "closed", None)
if ws_closed is None:
websocket_connected = self._ws is not None
else:
websocket_connected = (self._ws is not None) and (not ws_closed)
return {
"host_id": self.host_id,
"running": self._running,
"websocket_connected": websocket_connected,
"ffmpeg_running": bool(
self._ffmpeg_process and self._ffmpeg_process.poll() is None
),
"signal_backend_url": self.signal_backend_url,
"rtmp_url": self.rtmp_url,
}
# ---------------------------------------------------------------------
# 内部实现逻辑WebSocket 循环 / FFmpeg / WebRTC Offer 处理
# ---------------------------------------------------------------------
async def _run_main_loop(self):
try:
while self._running:
try:
async with websockets.connect(self.signal_backend_url) as ws:
self._ws = ws
await self._recv_loop()
except asyncio.CancelledError:
raise
except Exception as e:
if self._running:
print(
f"[CameraController] WebSocket connection error: {e}",
file=sys.stderr,
)
await asyncio.sleep(3)
except asyncio.CancelledError:
pass
async def _recv_loop(self):
assert self._ws is not None
ws = self._ws
async for message in ws:
try:
data = json.loads(message)
except json.JSONDecodeError:
print(
f"[CameraController] received non-JSON message: {message}",
file=sys.stderr,
)
continue
try:
await self._handle_message(data)
except Exception as e:
print(
f"[CameraController] error while handling message {data}: {e}",
file=sys.stderr,
)
async def _handle_message(self, data: Dict[str, Any]):
"""
处理来自信令后端的消息:
- command: start_stream / stop_stream / ptz_xxx
- type: offer (WebRTC)
"""
cmd = data.get("command")
# ---------- 推流控制 ----------
if cmd == "start_stream":
try:
self._start_ffmpeg()
except Exception as e:
print(
f"[CameraController] error when starting FFmpeg on start_stream: {e}",
file=sys.stderr,
)
return
if cmd == "stop_stream":
try:
self._stop_ffmpeg()
except Exception as e:
print(
f"[CameraController] error when stopping FFmpeg on stop_stream: {e}",
file=sys.stderr,
)
return
# # ---------- PTZ 控制 ----------
# # 例如信令可以发:
# # {"command": "ptz_move", "direction": "down", "speed": 0.5, "duration": 0.5}
# if cmd == "ptz_move":
# if self._ptz is None:
# # 没有初始化 PTZ静默忽略或打印一条
# print("[CameraController] PTZ not initialized.", file=sys.stderr)
# return
# direction = data.get("direction", "")
# speed = float(data.get("speed", 0.5))
# duration = float(data.get("duration", 0.5))
# try:
# if direction == "up":
# self._ptz.move_up(speed=speed, duration=duration)
# elif direction == "down":
# self._ptz.move_down(speed=speed, duration=duration)
# elif direction == "left":
# self._ptz.move_left(speed=speed, duration=duration)
# elif direction == "right":
# self._ptz.move_right(speed=speed, duration=duration)
# elif direction == "zoom_in":
# self._ptz.zoom_in(speed=speed, duration=duration)
# elif direction == "zoom_out":
# self._ptz.zoom_out(speed=speed, duration=duration)
# elif direction == "stop":
# self._ptz.stop()
# else:
# # 未知方向,忽略
# pass
# except Exception as e:
# print(
# f"[CameraController] error when handling PTZ move: {e}",
# file=sys.stderr,
# )
# return
# ---------- WebRTC Offer ----------
if data.get("type") == "offer":
offer_sdp = data.get("sdp", "")
camera_id = data.get("cameraId", "camera-01")
try:
answer_sdp = await self._handle_webrtc_offer(offer_sdp)
except Exception as e:
print(
f"[CameraController] error when handling WebRTC offer: {e}",
file=sys.stderr,
)
return
if self._ws:
answer_payload = {
"type": "answer",
"sdp": answer_sdp,
"cameraId": camera_id,
"hostId": self.host_id,
}
try:
await self._ws.send(json.dumps(answer_payload))
except Exception as e:
print(
f"[CameraController] error when sending WebRTC answer: {e}",
file=sys.stderr,
)
# ------------------------ FFmpeg 相关 ------------------------
def _start_ffmpeg(self):
if self._ffmpeg_process and self._ffmpeg_process.poll() is None:
return
cmd = [
"ffmpeg",
"-rtsp_transport", "tcp",
"-i", self.camera_rtsp_url,
"-c:v", "libx264",
"-preset", "ultrafast",
"-tune", "zerolatency",
"-profile:v", "baseline",
"-b:v", "1M",
"-maxrate", "1M",
"-bufsize", "2M",
"-g", "10",
"-keyint_min", "10",
"-sc_threshold", "0",
"-pix_fmt", "yuv420p",
"-x264-params", "bframes=0",
"-c:a", "aac",
"-ar", "44100",
"-ac", "1",
"-b:a", "64k",
"-f", "flv",
self.rtmp_url,
]
try:
self._ffmpeg_process = subprocess.Popen(
cmd,
stdout=subprocess.DEVNULL,
stderr=subprocess.STDOUT,
shell=False,
)
except Exception as e:
print(f"[CameraController] failed to start FFmpeg: {e}", file=sys.stderr)
self._ffmpeg_process = None
raise
def _stop_ffmpeg(self):
proc = self._ffmpeg_process
if proc and proc.poll() is None:
try:
proc.terminate()
try:
proc.wait(timeout=5)
except subprocess.TimeoutExpired:
try:
proc.kill()
try:
proc.wait(timeout=2)
except subprocess.TimeoutExpired:
print(
f"[CameraController] FFmpeg process did not exit even after kill (pid={proc.pid})",
file=sys.stderr,
)
except Exception as e:
print(
f"[CameraController] failed to kill FFmpeg process: {e}",
file=sys.stderr,
)
except Exception as e:
print(
f"[CameraController] error when stopping FFmpeg: {e}",
file=sys.stderr,
)
self._ffmpeg_process = None
# ------------------------ WebRTC Offer 相关 ------------------------
async def _handle_webrtc_offer(self, offer_sdp: str) -> str:
payload = {
"api": self.webrtc_api,
"streamurl": self.webrtc_stream_url,
"sdp": offer_sdp,
}
headers = {"Content-Type": "application/json"}
def _do_request():
return requests.post(
self.webrtc_api,
json=payload,
headers=headers,
timeout=10,
)
try:
loop = asyncio.get_running_loop()
resp = await loop.run_in_executor(None, _do_request)
except Exception as e:
print(
f"[CameraController] failed to send offer to media server: {e}",
file=sys.stderr,
)
raise
try:
resp.raise_for_status()
except Exception as e:
print(
f"[CameraController] media server HTTP error: {e}, "
f"status={resp.status_code}, body={resp.text[:200]}",
file=sys.stderr,
)
raise
try:
data = resp.json()
except Exception as e:
print(
f"[CameraController] failed to parse media server JSON: {e}, "
f"raw={resp.text[:200]}",
file=sys.stderr,
)
raise
answer_sdp = data.get("sdp", "")
if not answer_sdp:
msg = f"empty SDP from media server: {data}"
print(f"[CameraController] {msg}", file=sys.stderr)
raise RuntimeError(msg)
return answer_sdp

View File

@@ -0,0 +1,401 @@
#!/usr/bin/env python3
import asyncio
import json
import subprocess
import sys
import threading
import time
from typing import Optional, Dict, Any
import requests
import websockets
class CameraController:
"""
Uni-Lab-OS 摄像头驱动(Linux USB 摄像头版,无 PTZ)
- WebSocket 信令:signal_backend_url 连接到后端
例如: wss://sciol.ac.cn/api/realtime/signal/host/<host_id>
- 媒体服务器:RTMP 推流到 rtmp_url,WebRTC offer 转发到 SRS 的 webrtc_api
- 视频源:本地 USB 摄像头(V4L2),默认 /dev/video0
"""
def __init__(
self,
host_id: str = "demo-host",
signal_backend_url: str = "wss://sciol.ac.cn/api/realtime/signal/host",
rtmp_url: str = "rtmp://srs.sciol.ac.cn:4499/live/camera-01",
webrtc_api: str = "https://srs.sciol.ac.cn/rtc/v1/play/",
webrtc_stream_url: str = "webrtc://srs.sciol.ac.cn:4500/live/camera-01",
video_device: str = "/dev/video0",
width: int = 1280,
height: int = 720,
fps: int = 30,
video_bitrate: str = "1500k",
audio_device: Optional[str] = None, # 比如 "hw:1,0",没有音频就保持 None
audio_bitrate: str = "64k",
):
self.host_id = host_id
# 拼接最终 WebSocket URL.../host/<host_id>
signal_backend_url = signal_backend_url.rstrip("/")
if not signal_backend_url.endswith("/host"):
signal_backend_url = signal_backend_url + "/host"
self.signal_backend_url = f"{signal_backend_url}/{host_id}"
# 媒体服务器配置
self.rtmp_url = rtmp_url
self.webrtc_api = webrtc_api
self.webrtc_stream_url = webrtc_stream_url
# 本地采集配置
self.video_device = video_device
self.width = int(width)
self.height = int(height)
self.fps = int(fps)
self.video_bitrate = video_bitrate
self.audio_device = audio_device
self.audio_bitrate = audio_bitrate
# 运行时状态
self._ws: Optional[object] = None
self._ffmpeg_process: Optional[subprocess.Popen] = None
self._running = False
self._loop_task: Optional[asyncio.Future] = None
# 事件循环 & 线程
self._loop: Optional[asyncio.AbstractEventLoop] = None
self._loop_thread: Optional[threading.Thread] = None
try:
self.start()
except Exception as e:
print(f"[CameraController] __init__ auto start failed: {e}", file=sys.stderr)
# ---------------------------------------------------------------------
# 对外方法
# ---------------------------------------------------------------------
def start(self, config: Optional[Dict[str, Any]] = None):
if self._running:
return {"status": "already_running", "host_id": self.host_id}
# 应用 config 覆盖(如果有)
if config:
cfg_host_id = config.get("host_id")
if cfg_host_id:
self.host_id = cfg_host_id
signal_backend_url = config.get("signal_backend_url")
if signal_backend_url:
signal_backend_url = signal_backend_url.rstrip("/")
if not signal_backend_url.endswith("/host"):
signal_backend_url = signal_backend_url + "/host"
self.signal_backend_url = f"{signal_backend_url}/{self.host_id}"
self.rtmp_url = config.get("rtmp_url", self.rtmp_url)
self.webrtc_api = config.get("webrtc_api", self.webrtc_api)
self.webrtc_stream_url = config.get("webrtc_stream_url", self.webrtc_stream_url)
self.video_device = config.get("video_device", self.video_device)
self.width = int(config.get("width", self.width))
self.height = int(config.get("height", self.height))
self.fps = int(config.get("fps", self.fps))
self.video_bitrate = config.get("video_bitrate", self.video_bitrate)
self.audio_device = config.get("audio_device", self.audio_device)
self.audio_bitrate = config.get("audio_bitrate", self.audio_bitrate)
self._running = True
print("[CameraController] start(): starting FFmpeg streaming...", file=sys.stderr)
self._start_ffmpeg()
self._loop = asyncio.new_event_loop()
def loop_runner(loop: asyncio.AbstractEventLoop):
asyncio.set_event_loop(loop)
try:
loop.run_forever()
except Exception as e:
print(f"[CameraController] event loop error: {e}", file=sys.stderr)
self._loop_thread = threading.Thread(target=loop_runner, args=(self._loop,), daemon=True)
self._loop_thread.start()
self._loop_task = asyncio.run_coroutine_threadsafe(self._run_main_loop(), self._loop)
return {
"status": "started",
"host_id": self.host_id,
"signal_backend_url": self.signal_backend_url,
"rtmp_url": self.rtmp_url,
"webrtc_api": self.webrtc_api,
"webrtc_stream_url": self.webrtc_stream_url,
"video_device": self.video_device,
"width": self.width,
"height": self.height,
"fps": self.fps,
"video_bitrate": self.video_bitrate,
"audio_device": self.audio_device,
}
def stop(self) -> Dict[str, Any]:
self._running = False
# 先取消主任务(让 ws connect/sleep 尽快退出)
if self._loop_task is not None and not self._loop_task.done():
self._loop_task.cancel()
# 停止推流
self._stop_ffmpeg()
# 关闭 WebSocket在 loop 中执行)
if self._ws and self._loop is not None:
async def close_ws():
try:
await self._ws.close()
except Exception as e:
print(f"[CameraController] error closing WebSocket: {e}", file=sys.stderr)
try:
asyncio.run_coroutine_threadsafe(close_ws(), self._loop)
except Exception:
pass
# 停止事件循环
if self._loop is not None:
try:
self._loop.call_soon_threadsafe(self._loop.stop)
except Exception as e:
print(f"[CameraController] error stopping loop: {e}", file=sys.stderr)
# 等待线程退出
if self._loop_thread is not None:
try:
self._loop_thread.join(timeout=5)
except Exception as e:
print(f"[CameraController] error joining loop thread: {e}", file=sys.stderr)
self._ws = None
self._loop_task = None
self._loop = None
self._loop_thread = None
return {"status": "stopped", "host_id": self.host_id}
def get_status(self) -> Dict[str, Any]:
ws_closed = None
if self._ws is not None:
ws_closed = getattr(self._ws, "closed", None)
if ws_closed is None:
websocket_connected = self._ws is not None
else:
websocket_connected = (self._ws is not None) and (not ws_closed)
return {
"host_id": self.host_id,
"running": self._running,
"websocket_connected": websocket_connected,
"ffmpeg_running": bool(self._ffmpeg_process and self._ffmpeg_process.poll() is None),
"signal_backend_url": self.signal_backend_url,
"rtmp_url": self.rtmp_url,
"video_device": self.video_device,
"width": self.width,
"height": self.height,
"fps": self.fps,
"video_bitrate": self.video_bitrate,
}
# ---------------------------------------------------------------------
# WebSocket / 信令
# ---------------------------------------------------------------------
async def _run_main_loop(self):
print("[CameraController] main loop started", file=sys.stderr)
try:
while self._running:
try:
async with websockets.connect(self.signal_backend_url) as ws:
self._ws = ws
print(f"[CameraController] WebSocket connected: {self.signal_backend_url}", file=sys.stderr)
await self._recv_loop()
except asyncio.CancelledError:
raise
except Exception as e:
if self._running:
print(f"[CameraController] WebSocket connection error: {e}", file=sys.stderr)
await asyncio.sleep(3)
except asyncio.CancelledError:
pass
finally:
print("[CameraController] main loop exited", file=sys.stderr)
async def _recv_loop(self):
assert self._ws is not None
ws = self._ws
async for message in ws:
try:
data = json.loads(message)
except json.JSONDecodeError:
print(f"[CameraController] non-JSON message: {message}", file=sys.stderr)
continue
try:
await self._handle_message(data)
except Exception as e:
print(f"[CameraController] error handling message {data}: {e}", file=sys.stderr)
async def _handle_message(self, data: Dict[str, Any]):
cmd = data.get("command")
if cmd == "start_stream":
self._start_ffmpeg()
return
if cmd == "stop_stream":
self._stop_ffmpeg()
return
if data.get("type") == "offer":
offer_sdp = data.get("sdp", "")
camera_id = data.get("cameraId", "camera-01")
answer_sdp = await self._handle_webrtc_offer(offer_sdp)
if self._ws:
answer_payload = {
"type": "answer",
"sdp": answer_sdp,
"cameraId": camera_id,
"hostId": self.host_id,
}
await self._ws.send(json.dumps(answer_payload))
# ---------------------------------------------------------------------
# FFmpeg 推流V4L2 USB 摄像头)
# ---------------------------------------------------------------------
def _start_ffmpeg(self):
if self._ffmpeg_process and self._ffmpeg_process.poll() is None:
return
# 兼容性优先:不强制输入像素格式;失败再通过外部调整 width/height/fps
video_size = f"{self.width}x{self.height}"
cmd = [
"ffmpeg",
"-hide_banner",
"-loglevel",
"warning",
# video input
"-f", "v4l2",
"-framerate", str(self.fps),
"-video_size", video_size,
"-i", self.video_device,
]
# optional audio input
if self.audio_device:
cmd += [
"-f", "alsa",
"-i", self.audio_device,
"-c:a", "aac",
"-b:a", self.audio_bitrate,
"-ar", "44100",
"-ac", "1",
]
else:
cmd += ["-an"]
# video encode + rtmp out
cmd += [
"-c:v", "libx264",
"-preset", "ultrafast",
"-tune", "zerolatency",
"-profile:v", "baseline",
"-pix_fmt", "yuv420p",
"-b:v", self.video_bitrate,
"-maxrate", self.video_bitrate,
"-bufsize", "2M",
"-g", str(max(self.fps, 10)),
"-keyint_min", str(max(self.fps, 10)),
"-sc_threshold", "0",
"-x264-params", "bframes=0",
"-f", "flv",
self.rtmp_url,
]
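# With the defaults above (no audio_device) this roughly expands to:
#   ffmpeg -hide_banner -loglevel warning -f v4l2 -framerate 30 -video_size 1280x720 \
#          -i /dev/video0 -an -c:v libx264 -preset ultrafast -tune zerolatency -profile:v baseline \
#          -pix_fmt yuv420p -b:v 1500k -maxrate 1500k -bufsize 2M -g 30 -keyint_min 30 \
#          -sc_threshold 0 -x264-params bframes=0 -f flv rtmp://srs.sciol.ac.cn:4499/live/camera-01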
print(f"[CameraController] starting FFmpeg: {' '.join(cmd)}", file=sys.stderr)
try:
# 不再丢弃日志,至少能看到 ffmpeg 报错(调试很关键)
self._ffmpeg_process = subprocess.Popen(
cmd,
stdout=subprocess.DEVNULL,
stderr=sys.stderr,
shell=False,
)
except Exception as e:
self._ffmpeg_process = None
print(f"[CameraController] failed to start FFmpeg: {e}", file=sys.stderr)
def _stop_ffmpeg(self):
proc = self._ffmpeg_process
if proc and proc.poll() is None:
try:
proc.terminate()
try:
proc.wait(timeout=5)
except subprocess.TimeoutExpired:
proc.kill()
except Exception as e:
print(f"[CameraController] error stopping FFmpeg: {e}", file=sys.stderr)
self._ffmpeg_process = None
# ---------------------------------------------------------------------
# WebRTC offer -> SRS
# ---------------------------------------------------------------------
async def _handle_webrtc_offer(self, offer_sdp: str) -> str:
payload = {
"api": self.webrtc_api,
"streamurl": self.webrtc_stream_url,
"sdp": offer_sdp,
}
headers = {"Content-Type": "application/json"}
def _do_post():
return requests.post(self.webrtc_api, json=payload, headers=headers, timeout=10)
loop = asyncio.get_running_loop()
resp = await loop.run_in_executor(None, _do_post)
resp.raise_for_status()
data = resp.json()
answer_sdp = data.get("sdp", "")
if not answer_sdp:
raise RuntimeError(f"empty SDP from media server: {data}")
return answer_sdp
if __name__ == "__main__":
# 直接运行用于手动测试
c = CameraController(
host_id="demo-host",
video_device="/dev/video0",
width=1280,
height=720,
fps=30,
video_bitrate="1500k",
audio_device=None,
)
try:
while True:
time.sleep(1)  # was asyncio.sleep(1), which only returns an un-awaited coroutine in this synchronous loop
except KeyboardInterrupt:
c.stop()

View File

@@ -0,0 +1,51 @@
#!/usr/bin/env python3
import time
import json
from cameraUSB import CameraController
def main():
# 按你的实际情况改
cfg = dict(
host_id="demo-host",
signal_backend_url="wss://sciol.ac.cn/api/realtime/signal/host",
rtmp_url="rtmp://srs.sciol.ac.cn:4499/live/camera-01",
webrtc_api="https://srs.sciol.ac.cn/rtc/v1/play/",
webrtc_stream_url="webrtc://srs.sciol.ac.cn:4500/live/camera-01",
video_device="/dev/video7",
width=1280,
height=720,
fps=30,
video_bitrate="1500k",
audio_device=None,
)
c = CameraController(**cfg)
# 可选:如果你不想依赖 __init__ 自动 start,可以这样显式调用:
# c = CameraController(host_id=cfg["host_id"])
# c.start(cfg)
run_seconds = 30 # 测试运行时长
t0 = time.time()
try:
while True:
st = c.get_status()
print(json.dumps(st, ensure_ascii=False, indent=2))
if time.time() - t0 >= run_seconds:
break
time.sleep(2)
except KeyboardInterrupt:
print("Interrupted, stopping...")
finally:
print("Stopping controller...")
c.stop()
print("Done.")
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,36 @@
import cv2
# 推荐把 @ 进行 URL 编码:@ -> %40
RTSP_URL = "rtsp://admin:admin123@192.168.31.164:554/stream1"
OUTPUT_IMAGE = "rtsp_test_frame.jpg"
def main():
print(f"尝试连接 RTSP 流: {RTSP_URL}")
cap = cv2.VideoCapture(RTSP_URL)
if not cap.isOpened():
print("错误:无法打开 RTSP 流,请检查:")
print(" 1. IP/端口是否正确")
print(" 2. 账号密码(尤其是 @ 是否已转成 %40是否正确")
print(" 3. 摄像头是否允许当前主机访问(同一网段、防火墙等)")
return
print("连接成功,开始读取一帧...")
ret, frame = cap.read()
if not ret or frame is None:
print("错误:已连接但未能读取到帧数据(可能是码流未开启或网络抖动)")
cap.release()
return
# 保存当前帧
success = cv2.imwrite(OUTPUT_IMAGE, frame)
cap.release()
if success:
print(f"成功截取一帧并保存为: {OUTPUT_IMAGE}")
else:
print("错误:写入图片失败,请检查磁盘权限/路径")
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,21 @@
# run_camera_push.py
import time
from cameraDriver import CameraController # 这里根据你的文件名调整
if __name__ == "__main__":
controller = CameraController(
host_id="demo-host",
signal_backend_url="wss://sciol.ac.cn/api/realtime/signal/host",
rtmp_url="rtmp://srs.sciol.ac.cn:4499/live/camera-01",
webrtc_api="https://srs.sciol.ac.cn/rtc/v1/play/",
webrtc_stream_url="webrtc://srs.sciol.ac.cn:4500/live/camera-01",
camera_rtsp_url="rtsp://admin:admin123@192.168.31.164:554/stream1",
)
try:
while True:
status = controller.get_status()
print(status)
time.sleep(5)
except KeyboardInterrupt:
controller.stop()

View File

@@ -0,0 +1,78 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
使用 CameraController 来测试 PTZ
让摄像头按顺序向下、向上、向左、向右运动几次。
"""
import time
import sys
# 根据你的工程结构修改导入路径:
# 假设 CameraController 定义在 cameraDriver.py 里(与下面的 import 保持一致)
from cameraDriver import CameraController
def main():
# === 根据你的实际情况填 IP、端口、账号密码 ===
ptz_host = "192.168.31.164"
ptz_port = 2020 # 注意要和你单独测试 PTZController 时保持一致
ptz_user = "admin"
ptz_password = "admin123"
# 1. 创建 CameraController 实例
cam = CameraController(
# 其他摄像机相关参数按你类的 __init__ 来补充
ptz_host=ptz_host,
ptz_port=ptz_port,
ptz_user=ptz_user,
ptz_password=ptz_password,
)
# 2. 启动 / 初始化(如果你的 CameraController 有 start(config) 之类的接口)
# 这里给一个最小的 config重点是 PTZ 相关字段
config = {
"ptz_host": ptz_host,
"ptz_port": ptz_port,
"ptz_user": ptz_user,
"ptz_password": ptz_password,
}
try:
cam.start(config)
except Exception as e:
print(f"[TEST] CameraController start() 失败: {e}", file=sys.stderr)
return
# 这里可以判断一下内部 _ptz 是否初始化成功(如果你对 CameraController 做了封装)
if getattr(cam, "_ptz", None) is None:
print("[TEST] CameraController 内部 PTZ 未初始化成功,请检查 ptz_host/port/user/password 配置。", file=sys.stderr)
return
# 3. 依次调用 CameraController 的 PTZ 方法
# 这里假设你在 CameraController 中提供了这几个对外方法:
# ptz_move_down / ptz_move_up / ptz_move_left / ptz_move_right
# 如果你命名不一样,把下面调用名改成你的即可。
print("向下移动(通过 CameraController...")
cam.ptz_move_down(speed=0.5, duration=1.0)
time.sleep(1)
print("向上移动(通过 CameraController...")
cam.ptz_move_up(speed=0.5, duration=1.0)
time.sleep(1)
print("向左移动(通过 CameraController...")
cam.ptz_move_left(speed=0.5, duration=1.0)
time.sleep(1)
print("向右移动(通过 CameraController...")
cam.ptz_move_right(speed=0.5, duration=1.0)
time.sleep(1)
print("测试结束。")
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,50 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
测试 cameraDriver.py中的 PTZController 类,让摄像头按顺序运动几次
"""
import time
from cameraDriver import PTZController
def main():
# 根据你的实际情况填 IP、端口、账号密码
host = "192.168.31.164"
port = 80
user = "admin"
password = "admin123"
ptz = PTZController(host=host, port=port, user=user, password=password)
# 1. 连接摄像头
if not ptz.connect():
print("连接 PTZ 失败,检查 IP/用户名/密码/端口。")
return
# 2. 依次测试几个动作
# 每个动作之间 sleep 一下方便观察
print("向下移动...")
ptz.move_down(speed=0.5, duration=1.0)
time.sleep(1)
print("向上移动...")
ptz.move_up(speed=0.5, duration=1.0)
time.sleep(1)
print("向左移动...")
ptz.move_left(speed=0.5, duration=1.0)
time.sleep(1)
print("向右移动...")
ptz.move_right(speed=0.5, duration=1.0)
time.sleep(1)
print("测试结束。")
if __name__ == "__main__":
main()

View File

@@ -1,296 +0,0 @@
# -*- coding: utf-8 -*-
import serial
import time
import csv
import threading
import os
from collections import deque
from typing import Dict, Any, Optional
from pylabrobot.resources import Deck
from unilabos.devices.workstation.workstation_base import WorkstationBase
class ElectrolysisWaterPlatform(WorkstationBase):
"""
电解水平台工作站
基于 WorkstationBase 的电解水实验平台,支持串口通信和数据采集
"""
def __init__(
self,
deck: Deck,
port: str = "COM10",
baudrate: int = 115200,
csv_path: Optional[str] = None,
timeout: float = 0.2,
**kwargs
):
super().__init__(deck, **kwargs)
# ========== 配置 ==========
self.port = port
self.baudrate = baudrate
# 如果没有指定路径,默认保存在代码文件所在目录
if csv_path is None:
current_dir = os.path.dirname(os.path.abspath(__file__))
self.csv_path = os.path.join(current_dir, "stm32_data.csv")
else:
self.csv_path = csv_path
self.ser_timeout = timeout
self.chunk_read = 128
# 串口对象
self.ser: Optional[serial.Serial] = None
self.stop_flag = False
# 线程对象
self.rx_thread: Optional[threading.Thread] = None
self.tx_thread: Optional[threading.Thread] = None
# ==== 接收(下位机->上位机):固定 1+13+1 = 15 字节 ====
self.RX_HEAD = 0x3E
self.RX_TAIL = 0x3E
self.RX_FRAME_LEN = 1 + 13 + 1 # 15
# ==== 发送(上位机->下位机):固定 1+9+1 = 11 字节 ====
self.TX_HEAD = 0x3E
self.TX_TAIL = 0xE3 # 协议图中标注 E3 作为帧尾
self.TX_FRAME_LEN = 1 + 9 + 1 # 11
def open_serial(self, port: Optional[str] = None, baudrate: Optional[int] = None, timeout: Optional[float] = None) -> Optional[serial.Serial]:
"""打开串口"""
port = port or self.port
baudrate = baudrate or self.baudrate
timeout = timeout or self.ser_timeout
try:
ser = serial.Serial(port, baudrate, timeout=timeout)
print(f"[OK] 串口 {port} 已打开,波特率 {baudrate}")
ser.reset_input_buffer()
ser.reset_output_buffer()
self.ser = ser
return ser
except serial.SerialException as e:
print(f"[ERR] 无法打开串口 {port}: {e}")
return None
def close_serial(self):
"""关闭串口"""
if self.ser and self.ser.is_open:
self.ser.close()
print("[INFO] 串口已关闭")
@staticmethod
def u16_be(h: int, l: int) -> int:
"""将两个字节组合成16位无符号整数大端序"""
return ((h & 0xFF) << 8) | (l & 0xFF)
@staticmethod
def split_u16_be(val: int) -> tuple:
"""返回 (高字节, 低字节),输入会夹到 0..65535"""
v = int(max(0, min(65535, int(val))))
return (v >> 8) & 0xFF, v & 0xFF
# ================== 接收固定15字节 ==================
def parse_rx_payload(self, dat13: bytes) -> Optional[Dict[str, Any]]:
"""解析 13 字节数据区(下位机发送到上位机)"""
if len(dat13) != 13:
return None
current_mA = self.u16_be(dat13[0], dat13[1])
voltage_mV = self.u16_be(dat13[2], dat13[3])
temperature_raw = self.u16_be(dat13[4], dat13[5])
tds_ppm = self.u16_be(dat13[6], dat13[7])
gas_sccm = self.u16_be(dat13[8], dat13[9])
liquid_mL = self.u16_be(dat13[10], dat13[11])
ph_raw = dat13[12] & 0xFF
return {
"Current_mA": current_mA,
"Voltage_mV": voltage_mV,
"Temperature_C": round(temperature_raw / 100.0, 2),
"TDS_ppm": tds_ppm,
"GasFlow_sccm": gas_sccm,
"LiquidFlow_mL": liquid_mL,
"pH": round(ph_raw / 10.0, 2)
}
def try_parse_rx_frame(self, frame15: bytes) -> Optional[Dict[str, Any]]:
"""尝试解析接收帧"""
if len(frame15) != self.RX_FRAME_LEN:
return None
if frame15[0] != self.RX_HEAD or frame15[-1] != self.RX_TAIL:
return None
return self.parse_rx_payload(frame15[1:-1])
def rx_thread_fn(self):
"""接收线程函数"""
headers = ["Timestamp", "Current_mA", "Voltage_mV",
"Temperature_C", "TDS_ppm", "GasFlow_sccm", "LiquidFlow_mL", "pH"]
new_file = not os.path.exists(self.csv_path)
f = open(self.csv_path, mode='a', newline='', encoding='utf-8')
writer = csv.writer(f)
if new_file:
writer.writerow(headers)
f.flush()
buf = deque(maxlen=8192)
print(f"[RX] 开始接收(帧长 {self.RX_FRAME_LEN} 字节);写入:{self.csv_path}")
try:
while not self.stop_flag and self.ser and self.ser.is_open:
chunk = self.ser.read(self.chunk_read)
if chunk:
buf.extend(chunk)
while True:
# 找帧头
try:
start = next(i for i, b in enumerate(buf) if b == self.RX_HEAD)
except StopIteration:
buf.clear()
break
if start > 0:
for _ in range(start):
buf.popleft()
if len(buf) < self.RX_FRAME_LEN:
break
candidate = bytes([buf[i] for i in range(self.RX_FRAME_LEN)])
if candidate[-1] == self.RX_TAIL:
parsed = self.try_parse_rx_frame(candidate)
for _ in range(self.RX_FRAME_LEN):
buf.popleft()
if parsed:
ts = time.strftime("%Y-%m-%d %H:%M:%S")
row = [ts,
parsed["Current_mA"], parsed["Voltage_mV"],
parsed["Temperature_C"], parsed["TDS_ppm"],
parsed["GasFlow_sccm"], parsed["LiquidFlow_mL"],
parsed["pH"]]
writer.writerow(row)
f.flush()
# 若不想打印可注释下一行
# print(f"[{ts}] I={parsed['Current_mA']} mA, V={parsed['Voltage_mV']} mV, "
# f"T={parsed['Temperature_C']} °C, TDS={parsed['TDS_ppm']}, "
# f"Gas={parsed['GasFlow_sccm']} sccm, Liq={parsed['LiquidFlow_mL']} mL, pH={parsed['pH']}")
else:
# 头不变尾不对丢1字节继续对齐
buf.popleft()
else:
time.sleep(0.01)
finally:
f.close()
print("[RX] 接收线程退出CSV 已关闭")
# ================== 发送固定11字节 ==================
def build_tx_frame(self, mode: int, current_ma: int, voltage_mv: int, temp_c: float, ki: float, pump_percent: float) -> bytes:
"""
发送帧HEAD + [mode, I_hi, I_lo, V_hi, V_lo, T_hi, T_lo, Ki_byte, Pump_byte] + TAIL
- mode: 0=恒压, 1=恒流
- current_ma: mA (0..65535)
- voltage_mv: mV (0..65535)
- temp_c: ℃,将 *100 后拆分为高/低字节
- ki: 0.0..20.0 -> byte = round(ki * 10) 夹到 0..200
- pump_percent: 0..100 -> byte = round(pump * 2) 夹到 0..200
"""
mode_b = 1 if int(mode) == 1 else 0
i_hi, i_lo = self.split_u16_be(current_ma)
v_hi, v_lo = self.split_u16_be(voltage_mv)
t100 = int(round(float(temp_c) * 100.0))
t_hi, t_lo = self.split_u16_be(t100)
ki_b = int(max(0, min(200, round(float(ki) * 10))))
pump_b = int(max(0, min(200, round(float(pump_percent) * 2))))
return bytes((
self.TX_HEAD,
mode_b,
i_hi, i_lo,
v_hi, v_lo,
t_hi, t_lo,
ki_b,
pump_b,
self.TX_TAIL
))
def tx_thread_fn(self):
"""
发送线程函数
用户输入 6 个用逗号分隔的数值:
mode,current_mA,voltage_mV,set_temp_C,Ki,pump_percent
例如: 0,1000,500,0,0,50
"""
print("\n输入 6 个值(用英文逗号分隔),顺序为:")
print("mode,current_mA,voltage_mV,set_temp_C,Ki,pump_percent")
print("示例恒压0,500,1000,25,0,100 stop 结束)\n")
print("示例恒流1,1000,500,25,0,100 stop 结束)\n")
print("示例恒流1,2000,500,25,0,100 stop 结束)\n")
# 1,2000,500,25,0,100
while not self.stop_flag and self.ser and self.ser.is_open:
try:
line = input(">>> ").strip()
except EOFError:
self.stop_flag = True
break
if not line:
continue
if line.lower() == "stop":
self.stop_flag = True
print("[SYS] 停止程序")
break
try:
parts = [p.strip() for p in line.split(",")]
if len(parts) != 6:
raise ValueError("需要 6 个逗号分隔的数值")
mode = int(parts[0])
i_ma = int(float(parts[1]))
v_mv = int(float(parts[2]))
t_c = float(parts[3])
ki = float(parts[4])
pump = float(parts[5])
frame = self.build_tx_frame(mode, i_ma, v_mv, t_c, ki, pump)
self.ser.write(frame)
print("[TX]", " ".join(f"{b:02X}" for b in frame))
except Exception as e:
print("[TX] 输入/打包失败:", e)
print("格式mode,current_mA,voltage_mV,set_temp_C,Ki,pump_percent")
continue
def start(self):
"""启动电解水平台"""
self.ser = self.open_serial()
if self.ser:
try:
self.rx_thread = threading.Thread(target=self.rx_thread_fn, daemon=True)
self.tx_thread = threading.Thread(target=self.tx_thread_fn, daemon=True)
self.rx_thread.start()
self.tx_thread.start()
print("[INFO] 电解水平台已启动")
self.tx_thread.join() # 等待用户输入线程结束(输入 stop
finally:
self.close_serial()
def stop(self):
"""停止电解水平台"""
self.stop_flag = True
if self.rx_thread and self.rx_thread.is_alive():
self.rx_thread.join(timeout=2.0)
if self.tx_thread and self.tx_thread.is_alive():
self.tx_thread.join(timeout=2.0)
self.close_serial()
print("[INFO] 电解水平台已停止")
# ================== 主入口 ==================
if __name__ == "__main__":
# 创建一个简单的 Deck 用于测试
from pylabrobot.resources import Deck
deck = Deck()
platform = ElectrolysisWaterPlatform(deck)
platform.start()

View File

@@ -1,307 +0,0 @@
"""
LaiYu_Liquid 液体处理工作站集成模块
该模块提供了 LaiYu_Liquid 工作站与 UniLabOS 的完整集成,包括:
- 硬件后端和抽象接口
- 资源定义和管理
- 协议执行和液体传输
- 工作台配置和布局
主要组件:
- LaiYuLiquidBackend: 硬件后端实现
- LaiYuLiquid: 液体处理器抽象接口
- 各种资源类:枪头架、板、容器等
- 便捷创建函数和配置管理
使用示例:
from unilabos.devices.laiyu_liquid import (
LaiYuLiquid,
LaiYuLiquidBackend,
create_standard_deck,
create_tip_rack_1000ul
)
# 创建后端和液体处理器
backend = LaiYuLiquidBackend()
lh = LaiYuLiquid(backend=backend)
# 创建工作台
deck = create_standard_deck()
lh.deck = deck
# 设置和运行
await lh.setup()
"""
# 版本信息
__version__ = "1.0.0"
__author__ = "LaiYu_Liquid Integration Team"
__description__ = "LaiYu_Liquid 液体处理工作站 UniLabOS 集成模块"
# 驱动程序导入
from .drivers import (
XYZStepperController,
SOPAPipette,
MotorAxis,
MotorStatus,
SOPAConfig,
SOPAStatusCode,
StepperMotorDriver
)
# 控制器导入
from .controllers import (
XYZController,
PipetteController,
)
# 后端导入
from .backend.rviz_backend import (
LiquidHandlerRvizBackend,
)
# 资源类和创建函数导入
from .core.laiyu_liquid_res import (
LaiYuLiquidDeck,
LaiYuLiquidContainer,
LaiYuLiquidTipRack
)
# 主设备类和配置
from .core.laiyu_liquid_main import (
LaiYuLiquid,
LaiYuLiquidConfig,
LaiYuLiquidDeck,
LaiYuLiquidContainer,
LaiYuLiquidTipRack,
create_quick_setup
)
# 后端创建函数导入
from .backend import (
LaiYuLiquidBackend,
create_laiyu_backend,
)
# 导出所有公共接口
__all__ = [
# 版本信息
"__version__",
"__author__",
"__description__",
# 驱动程序
"SOPAPipette",
"SOPAConfig",
"StepperMotorDriver",
"XYZStepperController",
# 控制器
"PipetteController",
"XYZController",
# 后端
"LiquidHandlerRvizBackend",
# 资源创建函数
"create_tip_rack_1000ul",
"create_tip_rack_200ul",
"create_96_well_plate",
"create_deep_well_plate",
"create_8_tube_rack",
"create_standard_deck",
"create_waste_container",
"create_wash_container",
"create_reagent_container",
"load_deck_config",
# 后端创建函数
"create_laiyu_backend",
# 主要类
"LaiYuLiquid",
"LaiYuLiquidConfig",
"LaiYuLiquidBackend",
"LaiYuLiquidDeck",
# 工具函数
"get_version",
"get_supported_resources",
"create_quick_setup",
"validate_installation",
"print_module_info",
"setup_logging",
]
# 别名定义,为了向后兼容
LaiYuLiquidDevice = LaiYuLiquid # 主设备类别名
LaiYuLiquidController = XYZController # 控制器别名
LaiYuLiquidDriver = XYZStepperController # 驱动器别名
# 模块级别的便捷函数
def get_version() -> str:
"""
获取模块版本
Returns:
str: 版本号
"""
return __version__
def get_supported_resources() -> dict:
"""
获取支持的资源类型
Returns:
dict: 支持的资源类型字典
"""
return {
"tip_racks": {
"LaiYuLiquidTipRack": LaiYuLiquidTipRack,
},
"containers": {
"LaiYuLiquidContainer": LaiYuLiquidContainer,
},
"decks": {
"LaiYuLiquidDeck": LaiYuLiquidDeck,
},
"devices": {
"LaiYuLiquid": LaiYuLiquid,
}
}
def create_quick_setup() -> tuple:
"""
快速创建基本设置
Returns:
tuple: (backend, controllers, resources) 的元组
"""
# 创建后端
backend = LiquidHandlerRvizBackend()
# 创建控制器(使用默认端口进行演示)
pipette_controller = PipetteController(port="/dev/ttyUSB0", address=4)
xyz_controller = XYZController(port="/dev/ttyUSB1", auto_connect=False)
# 创建测试资源
tip_rack_1000 = create_tip_rack_1000ul("tip_rack_1000")
tip_rack_200 = create_tip_rack_200ul("tip_rack_200")
well_plate = create_96_well_plate("96_well_plate")
controllers = {
'pipette': pipette_controller,
'xyz': xyz_controller
}
resources = {
'tip_rack_1000': tip_rack_1000,
'tip_rack_200': tip_rack_200,
'well_plate': well_plate
}
return backend, controllers, resources
def validate_installation() -> bool:
"""
验证模块安装是否正确
Returns:
bool: 安装是否正确
"""
try:
# 检查核心类是否可以导入
from .core.laiyu_liquid_main import LaiYuLiquid, LaiYuLiquidConfig
from .backend import LaiYuLiquidBackend
from .controllers import XYZController, PipetteController
from .drivers import XYZStepperController, SOPAPipette
# 尝试创建基本对象
config = LaiYuLiquidConfig()
backend = create_laiyu_backend("validation_test")
print("模块安装验证成功")
return True
except Exception as e:
print(f"模块安装验证失败: {e}")
return False
def print_module_info():
"""打印模块信息"""
print(f"LaiYu_Liquid 集成模块")
print(f"版本: {__version__}")
print(f"作者: {__author__}")
print(f"描述: {__description__}")
print(f"")
print(f"支持的资源类型:")
resources = get_supported_resources()
for category, types in resources.items():
print(f" {category}:")
for type_name, type_class in types.items():
print(f" - {type_name}: {type_class.__name__}")
print(f"")
print(f"主要功能:")
print(f" - 硬件集成: LaiYuLiquidBackend")
print(f" - 抽象接口: LaiYuLiquid")
print(f" - 资源管理: 各种资源类和创建函数")
print(f" - 协议执行: transfer_liquid 和相关函数")
print(f" - 配置管理: deck.json 和加载函数")
# 模块初始化时的检查
def _check_dependencies():
"""检查依赖项"""
try:
import pylabrobot
import asyncio
import json
import logging
return True
except ImportError as e:
import logging
logging.warning(f"缺少依赖项 {e}")
return False
# 执行依赖检查
_dependencies_ok = _check_dependencies()
if not _dependencies_ok:
import logging
logging.warning("某些依赖项缺失,模块功能可能受限")
# 模块级别的日志配置
import logging
def setup_logging(level: str = "INFO"):
"""
设置模块日志
Args:
level: 日志级别 (DEBUG, INFO, WARNING, ERROR)
"""
logger = logging.getLogger("LaiYu_Liquid")
logger.setLevel(getattr(logging, level.upper()))
if not logger.handlers:
handler = logging.StreamHandler()
formatter = logging.Formatter(
'%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
handler.setFormatter(formatter)
logger.addHandler(handler)
return logger
# 默认日志设置
_logger = setup_logging()

View File

@@ -1,9 +0,0 @@
"""
LaiYu液体处理设备后端模块
提供设备后端接口和实现
"""
from .laiyu_backend import LaiYuLiquidBackend, create_laiyu_backend
__all__ = ['LaiYuLiquidBackend', 'create_laiyu_backend']

View File

@@ -1,334 +0,0 @@
"""
LaiYu液体处理设备后端实现
提供设备的后端接口和控制逻辑
"""
import logging
from typing import Dict, Any, Optional, List
from abc import ABC, abstractmethod
# Try to import the PyLabRobot backend
try:
from pylabrobot.liquid_handling.backends import LiquidHandlerBackend
PYLABROBOT_AVAILABLE = True
except ImportError:
PYLABROBOT_AVAILABLE = False
# Fall back to a mock backend base class
class LiquidHandlerBackend:
def __init__(self, name: str):
self.name = name
self.is_connected = False
def connect(self):
"""Connect to the device."""
pass
def disconnect(self):
"""Disconnect from the device."""
pass
class LaiYuLiquidBackend(LiquidHandlerBackend):
"""LaiYu液体处理设备后端"""
def __init__(self, name: str = "LaiYu_Liquid_Backend"):
"""
初始化LaiYu液体处理设备后端
Args:
name: 后端名称
"""
if PYLABROBOT_AVAILABLE:
# PyLabRobot 的 LiquidHandlerBackend 不接受参数
super().__init__()
else:
# 模拟版本接受 name 参数
super().__init__(name)
self.name = name
self.logger = logging.getLogger(__name__)
self.is_connected = False
self.device_info = {
"name": "LaiYu液体处理设备",
"version": "1.0.0",
"manufacturer": "LaiYu",
"model": "LaiYu_Liquid_Handler"
}
def connect(self) -> bool:
"""
连接到LaiYu液体处理设备
Returns:
bool: 连接是否成功
"""
try:
self.logger.info("正在连接到LaiYu液体处理设备...")
# 这里应该实现实际的设备连接逻辑
# 目前返回模拟连接成功
self.is_connected = True
self.logger.info("成功连接到LaiYu液体处理设备")
return True
except Exception as e:
self.logger.error(f"连接LaiYu液体处理设备失败: {e}")
self.is_connected = False
return False
def disconnect(self) -> bool:
"""
断开与LaiYu液体处理设备的连接
Returns:
bool: 断开连接是否成功
"""
try:
self.logger.info("正在断开与LaiYu液体处理设备的连接...")
# 这里应该实现实际的设备断开连接逻辑
self.is_connected = False
self.logger.info("成功断开与LaiYu液体处理设备的连接")
return True
except Exception as e:
self.logger.error(f"断开LaiYu液体处理设备连接失败: {e}")
return False
def is_device_connected(self) -> bool:
"""
检查设备是否已连接
Returns:
bool: 设备是否已连接
"""
return self.is_connected
def get_device_info(self) -> Dict[str, Any]:
"""
获取设备信息
Returns:
Dict[str, Any]: 设备信息字典
"""
return self.device_info.copy()
def home_device(self) -> bool:
"""
设备归零操作
Returns:
bool: 归零是否成功
"""
if not self.is_connected:
self.logger.error("设备未连接,无法执行归零操作")
return False
try:
self.logger.info("正在执行设备归零操作...")
# 这里应该实现实际的设备归零逻辑
self.logger.info("设备归零操作完成")
return True
except Exception as e:
self.logger.error(f"设备归零操作失败: {e}")
return False
def aspirate(self, volume: float, location: Dict[str, Any]) -> bool:
"""
吸液操作
Args:
volume: 吸液体积 (微升)
location: 吸液位置信息
Returns:
bool: 吸液是否成功
"""
if not self.is_connected:
self.logger.error("设备未连接,无法执行吸液操作")
return False
try:
self.logger.info(f"正在执行吸液操作: 体积={volume}μL, 位置={location}")
# 这里应该实现实际的吸液逻辑
self.logger.info("吸液操作完成")
return True
except Exception as e:
self.logger.error(f"吸液操作失败: {e}")
return False
def dispense(self, volume: float, location: Dict[str, Any]) -> bool:
"""
排液操作
Args:
volume: 排液体积 (微升)
location: 排液位置信息
Returns:
bool: 排液是否成功
"""
if not self.is_connected:
self.logger.error("设备未连接,无法执行排液操作")
return False
try:
self.logger.info(f"正在执行排液操作: 体积={volume}μL, 位置={location}")
# 这里应该实现实际的排液逻辑
self.logger.info("排液操作完成")
return True
except Exception as e:
self.logger.error(f"排液操作失败: {e}")
return False
def pick_up_tip(self, location: Dict[str, Any]) -> bool:
"""
取枪头操作
Args:
location: 枪头位置信息
Returns:
bool: 取枪头是否成功
"""
if not self.is_connected:
self.logger.error("设备未连接,无法执行取枪头操作")
return False
try:
self.logger.info(f"正在执行取枪头操作: 位置={location}")
# 这里应该实现实际的取枪头逻辑
self.logger.info("取枪头操作完成")
return True
except Exception as e:
self.logger.error(f"取枪头操作失败: {e}")
return False
def drop_tip(self, location: Dict[str, Any]) -> bool:
"""
丢弃枪头操作
Args:
location: 丢弃位置信息
Returns:
bool: 丢弃枪头是否成功
"""
if not self.is_connected:
self.logger.error("设备未连接,无法执行丢弃枪头操作")
return False
try:
self.logger.info(f"正在执行丢弃枪头操作: 位置={location}")
# 这里应该实现实际的丢弃枪头逻辑
self.logger.info("丢弃枪头操作完成")
return True
except Exception as e:
self.logger.error(f"丢弃枪头操作失败: {e}")
return False
def move_to(self, location: Dict[str, Any]) -> bool:
"""
移动到指定位置
Args:
location: 目标位置信息
Returns:
bool: 移动是否成功
"""
if not self.is_connected:
self.logger.error("设备未连接,无法执行移动操作")
return False
try:
self.logger.info(f"正在移动到位置: {location}")
# 这里应该实现实际的移动逻辑
self.logger.info("移动操作完成")
return True
except Exception as e:
self.logger.error(f"移动操作失败: {e}")
return False
def get_status(self) -> Dict[str, Any]:
"""
获取设备状态
Returns:
Dict[str, Any]: 设备状态信息
"""
return {
"connected": self.is_connected,
"device_info": self.device_info,
"status": "ready" if self.is_connected else "disconnected"
}
# PyLabRobot abstract method implementations
def stop(self):
"""Stop all operations."""
self.logger.info("Stopping all operations")
@property
def num_channels(self) -> int:
"""Return the number of channels."""
return 1  # single-channel pipette
def can_pick_up_tip(self, tip_rack, tip_position) -> bool:
"""Check whether a tip can be picked up."""
return True  # simplified implementation always returns True
def pick_up_tips(self, tip_rack, tip_positions):
"""Pick up multiple tips."""
self.logger.info(f"Picking up tips: {tip_positions}")
def drop_tips(self, tip_rack, tip_positions):
"""Drop multiple tips."""
self.logger.info(f"Dropping tips: {tip_positions}")
def pick_up_tips96(self, tip_rack):
"""Pick up 96 tips."""
self.logger.info("Picking up 96 tips")
def drop_tips96(self, tip_rack):
"""Drop 96 tips."""
self.logger.info("Dropping 96 tips")
def aspirate96(self, volume, plate, well_positions):
"""96-channel aspirate."""
self.logger.info(f"96-channel aspirate: volume={volume}")
def dispense96(self, volume, plate, well_positions):
"""96-channel dispense."""
self.logger.info(f"96-channel dispense: volume={volume}")
def pick_up_resource(self, resource, location):
"""Pick up a resource."""
self.logger.info(f"Picking up resource: {resource}")
def drop_resource(self, resource, location):
"""Place a resource."""
self.logger.info(f"Placing resource: {resource}")
def move_picked_up_resource(self, resource, location):
"""Move a picked-up resource."""
self.logger.info(f"Moving resource: {resource} -> {location}")
def create_laiyu_backend(name: str = "LaiYu_Liquid_Backend") -> LaiYuLiquidBackend:
"""
创建LaiYu液体处理设备后端实例
Args:
name: 后端名称
Returns:
LaiYuLiquidBackend: 后端实例
"""
return LaiYuLiquidBackend(name)
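
A short sketch of driving this backend directly; every call maps to one of the simulated methods above, and the coordinate dictionaries are placeholder values:

backend = create_laiyu_backend("demo_backend")
if backend.connect():  # simulated connection in this implementation
    backend.home_device()
    tip_location = {"x": 171.0, "y": 117.0, "z": 137.0}  # placeholder coordinates
    backend.pick_up_tip(tip_location)
    backend.aspirate(100.0, {"x": 172.0, "y": 178.0, "z": 130.0})
    backend.dispense(100.0, {"x": 173.0, "y": 180.0, "z": 130.0})
    backend.drop_tip(tip_location)
    print(backend.get_status())
    backend.disconnect()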

View File

@@ -1,209 +0,0 @@
import json
from typing import List, Optional, Union
from pylabrobot.liquid_handling.backends.backend import (
LiquidHandlerBackend,
)
from pylabrobot.liquid_handling.standard import (
Drop,
DropTipRack,
MultiHeadAspirationContainer,
MultiHeadAspirationPlate,
MultiHeadDispenseContainer,
MultiHeadDispensePlate,
Pickup,
PickupTipRack,
ResourceDrop,
ResourceMove,
ResourcePickup,
SingleChannelAspiration,
SingleChannelDispense,
)
from pylabrobot.resources import Resource, Tip
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState
import time
from rclpy.action import ActionClient
from unilabos_msgs.action import SendCmd
import re
from unilabos.devices.ros_dev.liquid_handler_joint_publisher import JointStatePublisher
class LiquidHandlerRvizBackend(LiquidHandlerBackend):
"""Chatter box backend for device-free testing. Prints out all operations."""
_pip_length = 5
_vol_length = 8
_resource_length = 20
_offset_length = 16
_flow_rate_length = 10
_blowout_length = 10
_lld_z_length = 10
_kwargs_length = 15
_tip_type_length = 12
_max_volume_length = 16
_fitting_depth_length = 20
_tip_length_length = 16
# _pickup_method_length = 20
_filter_length = 10
def __init__(self, num_channels: int = 8):
"""Initialize a chatter box backend."""
super().__init__()
self._num_channels = num_channels
# rclpy.init()
if not rclpy.ok():
rclpy.init()
self.joint_state_publisher = None
async def setup(self):
self.joint_state_publisher = JointStatePublisher()
await super().setup()
async def stop(self):
pass
def serialize(self) -> dict:
return {**super().serialize(), "num_channels": self.num_channels}
@property
def num_channels(self) -> int:
return self._num_channels
async def assigned_resource_callback(self, resource: Resource):
pass
async def unassigned_resource_callback(self, name: str):
pass
async def pick_up_tips(self, ops: List[Pickup], use_channels: List[int], **backend_kwargs):
for op, channel in zip(ops, use_channels):
offset = f"{round(op.offset.x, 1)},{round(op.offset.y, 1)},{round(op.offset.z, 1)}"
row = (
f" p{channel}: "
f"{op.resource.name[-30:]:<{LiquidHandlerRvizBackend._resource_length}} "
f"{offset:<{LiquidHandlerRvizBackend._offset_length}} "
f"{op.tip.__class__.__name__:<{LiquidHandlerRvizBackend._tip_type_length}} "
f"{op.tip.maximal_volume:<{LiquidHandlerRvizBackend._max_volume_length}} "
f"{op.tip.fitting_depth:<{LiquidHandlerRvizBackend._fitting_depth_length}} "
f"{op.tip.total_tip_length:<{LiquidHandlerRvizBackend._tip_length_length}} "
# f"{str(op.tip.pickup_method)[-20:]:<{ChatterboxBackend._pickup_method_length}} "
f"{'Yes' if op.tip.has_filter else 'No':<{LiquidHandlerRvizBackend._filter_length}}"
)
coordinate = ops[0].resource.get_absolute_location(x="c",y="c")
x = coordinate.x
y = coordinate.y
z = coordinate.z + 70
self.joint_state_publisher.send_resource_action(ops[0].resource.name, x, y, z, "pick")
# goback()
async def drop_tips(self, ops: List[Drop], use_channels: List[int], **backend_kwargs):
coordinate = ops[0].resource.get_absolute_location(x="c",y="c")
x = coordinate.x
y = coordinate.y
z = coordinate.z + 70
self.joint_state_publisher.send_resource_action(ops[0].resource.name, x, y, z, "drop_trash")
# goback()
async def aspirate(
self,
ops: List[SingleChannelAspiration],
use_channels: List[int],
**backend_kwargs,
):
# Perform the aspirate operation
for o, p in zip(ops, use_channels):
offset = f"{round(o.offset.x, 1)},{round(o.offset.y, 1)},{round(o.offset.z, 1)}"
row = (
f" p{p}: "
f"{o.volume:<{LiquidHandlerRvizBackend._vol_length}} "
f"{o.resource.name[-20:]:<{LiquidHandlerRvizBackend._resource_length}} "
f"{offset:<{LiquidHandlerRvizBackend._offset_length}} "
f"{str(o.flow_rate):<{LiquidHandlerRvizBackend._flow_rate_length}} "
f"{str(o.blow_out_air_volume):<{LiquidHandlerRvizBackend._blowout_length}} "
f"{str(o.liquid_height):<{LiquidHandlerRvizBackend._lld_z_length}} "
# f"{o.liquids if o.liquids is not None else 'none'}"
)
for key, value in backend_kwargs.items():
if isinstance(value, list) and all(isinstance(v, bool) for v in value):
value = "".join("T" if v else "F" for v in value)
if isinstance(value, list):
value = "".join(map(str, value))
row += f" {value:<15}"
coordinate = ops[0].resource.get_absolute_location(x="c",y="c")
x = coordinate.x
y = coordinate.y
z = coordinate.z + 70
self.joint_state_publisher.send_resource_action(ops[0].resource.name, x, y, z, "")
async def dispense(
self,
ops: List[SingleChannelDispense],
use_channels: List[int],
**backend_kwargs,
):
for o, p in zip(ops, use_channels):
offset = f"{round(o.offset.x, 1)},{round(o.offset.y, 1)},{round(o.offset.z, 1)}"
row = (
f" p{p}: "
f"{o.volume:<{LiquidHandlerRvizBackend._vol_length}} "
f"{o.resource.name[-20:]:<{LiquidHandlerRvizBackend._resource_length}} "
f"{offset:<{LiquidHandlerRvizBackend._offset_length}} "
f"{str(o.flow_rate):<{LiquidHandlerRvizBackend._flow_rate_length}} "
f"{str(o.blow_out_air_volume):<{LiquidHandlerRvizBackend._blowout_length}} "
f"{str(o.liquid_height):<{LiquidHandlerRvizBackend._lld_z_length}} "
# f"{o.liquids if o.liquids is not None else 'none'}"
)
for key, value in backend_kwargs.items():
if isinstance(value, list) and all(isinstance(v, bool) for v in value):
value = "".join("T" if v else "F" for v in value)
if isinstance(value, list):
value = "".join(map(str, value))
row += f" {value:<{LiquidHandlerRvizBackend._kwargs_length}}"
coordinate = ops[0].resource.get_absolute_location(x="c",y="c")
x = coordinate.x
y = coordinate.y
z = coordinate.z + 70
self.joint_state_publisher.send_resource_action(ops[0].resource.name, x, y, z, "")
async def pick_up_tips96(self, pickup: PickupTipRack, **backend_kwargs):
pass
async def drop_tips96(self, drop: DropTipRack, **backend_kwargs):
pass
async def aspirate96(
self, aspiration: Union[MultiHeadAspirationPlate, MultiHeadAspirationContainer]
):
pass
async def dispense96(self, dispense: Union[MultiHeadDispensePlate, MultiHeadDispenseContainer]):
pass
async def pick_up_resource(self, pickup: ResourcePickup):
# Perform the resource pickup operation
pass
async def move_picked_up_resource(self, move: ResourceMove):
# Perform the resource move operation
pass
async def drop_resource(self, drop: ResourceDrop):
# Perform the resource placement operation
pass
def can_pick_up_tip(self, channel_idx: int, tip: Tip) -> bool:
return True
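
A minimal sketch of wiring this backend into a PyLabRobot LiquidHandler; the deck object is assumed to be built elsewhere (for example from deck.json), and a sourced ROS 2 environment is required for the JointStatePublisher node to start:

import asyncio
from pylabrobot.liquid_handling import LiquidHandler

async def main(deck):  # `deck` is an assumed, externally constructed PyLabRobot deck
    backend = LiquidHandlerRvizBackend(num_channels=8)
    lh = LiquidHandler(backend=backend, deck=deck)
    await lh.setup()  # calls backend.setup(), which starts the JointStatePublisher node
    # ... issue lh.pick_up_tips / lh.aspirate / lh.dispense calls here ...
    await lh.stop()

# asyncio.run(main(deck))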

File diff suppressed because it is too large

View File

@@ -1,14 +0,0 @@
goto 171 178 57 H1
goto 171 117 57 A1
goto 172 178 130
goto 173 179 133
goto 173 180 133
goto 173 180 138
goto 173 180 125 +10mm, at the top edge of the empty tip
goto 173 180 130 cannot pick up
goto 173 180 133 cannot pick up
goto 173 180 135
goto 173 180 137 picked it up!!!!
goto 173 180 131 eject tip, H1
goto 173 117 137 A1 +10mm can now pick up a new tip

View File

@@ -1,25 +0,0 @@
"""
LaiYu_Liquid controllers module
This module contains the high-level controllers for the LaiYu_Liquid liquid handling workstation:
- Pipette controller: high-level interface for liquid handling
- XYZ motion controller: high-level interface for three-axis motion
"""
# Pipette controller import
from .pipette_controller import PipetteController
# XYZ motion controller import
from .xyz_controller import XYZController
__all__ = [
# Pipette controller
"PipetteController",
# XYZ motion controller
"XYZController",
]
__version__ = "1.0.0"
__author__ = "LaiYu_Liquid Controller Team"
__description__ = "LaiYu_Liquid 高级控制器集合"

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,44 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
LaiYu liquid handling device core module
This module contains the core components of the LaiYu liquid handling device:
- LaiYu_Liquid.py: main device class and configuration management
- abstract_protocol.py: abstract protocol definitions
- laiyu_liquid_res.py: device resource management
Author: UniLab Team
Version: 2.0.0
"""
from .laiyu_liquid_main import (
LaiYuLiquid,
LaiYuLiquidConfig,
LaiYuLiquidBackend,
LaiYuLiquidDeck,
LaiYuLiquidContainer,
LaiYuLiquidTipRack,
create_quick_setup
)
from .laiyu_liquid_res import (
LaiYuLiquidDeck,
LaiYuLiquidContainer,
LaiYuLiquidTipRack
)
__all__ = [
# Main device classes
'LaiYuLiquid',
'LaiYuLiquidConfig',
'LaiYuLiquidBackend',
# Device resources
'LaiYuLiquidDeck',
'LaiYuLiquidContainer',
'LaiYuLiquidTipRack',
# Utility functions
'create_quick_setup'
]

View File

@@ -1,529 +0,0 @@
"""
LaiYu_Liquid abstract protocol implementation
This module provides abstract protocols for liquid resource management and transfer, including:
- MaterialResource: liquid resource management class
- transfer_liquid: liquid transfer function
- related helper classes and functions
Main features:
- manage liquid resources across multiple wells
- calculate and track liquid volumes
- handle liquid transfer operations
- query resource status
"""
import logging
from typing import Dict, List, Optional, Union, Any, Tuple
from dataclasses import dataclass, field
from enum import Enum
import uuid
import time
# pylabrobot imports
from pylabrobot.resources import Resource, Well, Plate
logger = logging.getLogger(__name__)
class LiquidType(Enum):
"""液体类型枚举"""
WATER = "water"
ETHANOL = "ethanol"
DMSO = "dmso"
BUFFER = "buffer"
SAMPLE = "sample"
REAGENT = "reagent"
WASTE = "waste"
UNKNOWN = "unknown"
@dataclass
class LiquidInfo:
"""液体信息类"""
liquid_type: LiquidType = LiquidType.UNKNOWN
volume: float = 0.0 # 体积 (μL)
concentration: Optional[float] = None # 浓度 (mg/ml, M等)
ph: Optional[float] = None # pH值
temperature: Optional[float] = None # 温度 (°C)
viscosity: Optional[float] = None # 粘度 (cP)
density: Optional[float] = None # 密度 (g/ml)
description: str = "" # 描述信息
def __str__(self) -> str:
return f"{self.liquid_type.value}({self.description})"
@dataclass
class WellContent:
"""孔位内容类"""
volume: float = 0.0 # 当前体积 (ul)
max_volume: float = 1000.0 # 最大容量 (ul)
liquid_info: LiquidInfo = field(default_factory=LiquidInfo)
last_updated: float = field(default_factory=time.time)
@property
def is_empty(self) -> bool:
"""Check whether the well is empty."""
return self.volume <= 0.0
@property
def is_full(self) -> bool:
"""Check whether the well is full."""
return self.volume >= self.max_volume
@property
def available_volume(self) -> float:
"""Remaining available volume."""
return max(0.0, self.max_volume - self.volume)
@property
def fill_percentage(self) -> float:
"""Fill level as a percentage."""
return (self.volume / self.max_volume) * 100.0 if self.max_volume > 0 else 0.0
def can_add_volume(self, volume: float) -> bool:
"""Check whether the given volume can be added."""
return (self.volume + volume) <= self.max_volume
def can_remove_volume(self, volume: float) -> bool:
"""Check whether the given volume can be removed."""
return self.volume >= volume
def add_volume(self, volume: float, liquid_info: Optional[LiquidInfo] = None) -> bool:
"""
Add liquid volume.
Args:
volume: volume to add (μL)
liquid_info: liquid information
Returns:
bool: whether the volume was added successfully
"""
if not self.can_add_volume(volume):
return False
self.volume += volume
if liquid_info:
self.liquid_info = liquid_info
self.last_updated = time.time()
return True
def remove_volume(self, volume: float) -> bool:
"""
Remove liquid volume.
Args:
volume: volume to remove (μL)
Returns:
bool: whether the volume was removed successfully
"""
if not self.can_remove_volume(volume):
return False
self.volume -= volume
self.last_updated = time.time()
# If the well is fully emptied, reset the liquid information
if self.volume <= 0.0:
self.volume = 0.0
self.liquid_info = LiquidInfo()
return True
class MaterialResource:
"""
Liquid resource management class.
This class manages resource state during liquid handling, including:
- tracking liquid volume and type for multiple wells
- computing total and available volumes
- handling addition and removal of liquid
- providing resource status queries
"""
def __init__(
self,
resource: Resource,
wells: Optional[List[Well]] = None,
default_max_volume: float = 1000.0
):
"""
Initialize the material resource.
Args:
resource: pylabrobot resource object
wells: list of wells; if None, they are obtained automatically
default_max_volume: default maximum volume (μL)
"""
self.resource = resource
self.resource_id = str(uuid.uuid4())
self.default_max_volume = default_max_volume
# Obtain the list of wells
if wells is None:
if hasattr(resource, 'get_wells'):
self.wells = resource.get_wells()
elif hasattr(resource, 'wells'):
self.wells = resource.wells
else:
# If there are no wells, create a single virtual well
self.wells = [resource]
else:
self.wells = wells
# Initialize well contents
self.well_contents: Dict[str, WellContent] = {}
for well in self.wells:
well_id = self._get_well_id(well)
self.well_contents[well_id] = WellContent(
max_volume=default_max_volume
)
logger.info(f"Initialized material resource: {resource.name}, wells: {len(self.wells)}")
def _get_well_id(self, well: Union[Well, Resource]) -> str:
"""获取孔位ID"""
if hasattr(well, 'name'):
return well.name
else:
return str(id(well))
@property
def name(self) -> str:
"""资源名称"""
return self.resource.name
@property
def total_volume(self) -> float:
"""总液体体积"""
return sum(content.volume for content in self.well_contents.values())
@property
def total_max_volume(self) -> float:
"""总最大容量"""
return sum(content.max_volume for content in self.well_contents.values())
@property
def available_volume(self) -> float:
"""总可用体积"""
return sum(content.available_volume for content in self.well_contents.values())
@property
def well_count(self) -> int:
"""孔位数量"""
return len(self.wells)
@property
def empty_wells(self) -> List[str]:
"""空孔位列表"""
return [well_id for well_id, content in self.well_contents.items()
if content.is_empty]
@property
def full_wells(self) -> List[str]:
"""满孔位列表"""
return [well_id for well_id, content in self.well_contents.items()
if content.is_full]
@property
def occupied_wells(self) -> List[str]:
"""有液体的孔位列表"""
return [well_id for well_id, content in self.well_contents.items()
if not content.is_empty]
def get_well_content(self, well_id: str) -> Optional[WellContent]:
"""获取指定孔位的内容"""
return self.well_contents.get(well_id)
def get_well_volume(self, well_id: str) -> float:
"""获取指定孔位的体积"""
content = self.get_well_content(well_id)
return content.volume if content else 0.0
def set_well_volume(
self,
well_id: str,
volume: float,
liquid_info: Optional[LiquidInfo] = None
) -> bool:
"""
Set the volume of the given well.
Args:
well_id: well ID
volume: volume (μL)
liquid_info: liquid information
Returns:
bool: whether the volume was set successfully
"""
if well_id not in self.well_contents:
logger.error(f"Well {well_id} does not exist")
return False
content = self.well_contents[well_id]
if volume > content.max_volume:
logger.error(f"Volume {volume} exceeds the maximum capacity {content.max_volume}")
return False
content.volume = max(0.0, volume)
if liquid_info:
content.liquid_info = liquid_info
content.last_updated = time.time()
logger.info(f"Set well {well_id} volume: {volume} μL")
return True
def add_liquid(
self,
well_id: str,
volume: float,
liquid_info: Optional[LiquidInfo] = None
) -> bool:
"""
Add liquid to the given well.
Args:
well_id: well ID
volume: volume to add (μL)
liquid_info: liquid information
Returns:
bool: whether the liquid was added successfully
"""
if well_id not in self.well_contents:
logger.error(f"Well {well_id} does not exist")
return False
content = self.well_contents[well_id]
success = content.add_volume(volume, liquid_info)
if success:
logger.info(f"Added {volume} μL of liquid to well {well_id}")
else:
logger.error(f"Could not add {volume} μL of liquid to well {well_id}")
return success
def remove_liquid(self, well_id: str, volume: float) -> bool:
"""
Remove liquid from the given well.
Args:
well_id: well ID
volume: volume to remove (μL)
Returns:
bool: whether the liquid was removed successfully
"""
if well_id not in self.well_contents:
logger.error(f"Well {well_id} does not exist")
return False
content = self.well_contents[well_id]
success = content.remove_volume(volume)
if success:
logger.info(f"Removed {volume} μL of liquid from well {well_id}")
else:
logger.error(f"Could not remove {volume} μL of liquid from well {well_id}")
return success
def find_wells_with_volume(self, min_volume: float) -> List[str]:
"""
Find wells containing at least the given volume.
Args:
min_volume: minimum volume (μL)
Returns:
List[str]: list of well IDs that meet the criterion
"""
return [well_id for well_id, content in self.well_contents.items()
if content.volume >= min_volume]
def find_wells_with_space(self, min_space: float) -> List[str]:
"""
Find wells with at least the given available space.
Args:
min_space: minimum available space (μL)
Returns:
List[str]: list of well IDs that meet the criterion
"""
return [well_id for well_id, content in self.well_contents.items()
if content.available_volume >= min_space]
def get_status_summary(self) -> Dict[str, Any]:
"""获取资源状态摘要"""
return {
"resource_name": self.name,
"resource_id": self.resource_id,
"well_count": self.well_count,
"total_volume": self.total_volume,
"total_max_volume": self.total_max_volume,
"available_volume": self.available_volume,
"fill_percentage": (self.total_volume / self.total_max_volume) * 100.0,
"empty_wells": len(self.empty_wells),
"full_wells": len(self.full_wells),
"occupied_wells": len(self.occupied_wells)
}
def get_detailed_status(self) -> Dict[str, Any]:
"""获取详细状态信息"""
well_details = {}
for well_id, content in self.well_contents.items():
well_details[well_id] = {
"volume": content.volume,
"max_volume": content.max_volume,
"available_volume": content.available_volume,
"fill_percentage": content.fill_percentage,
"liquid_type": content.liquid_info.liquid_type.value,
"description": content.liquid_info.description,
"last_updated": content.last_updated
}
return {
"summary": self.get_status_summary(),
"wells": well_details
}
def transfer_liquid(
source: MaterialResource,
target: MaterialResource,
volume: float,
source_well_id: Optional[str] = None,
target_well_id: Optional[str] = None,
liquid_info: Optional[LiquidInfo] = None
) -> bool:
"""
Transfer liquid between two material resources.
Args:
source: source resource
target: target resource
volume: transfer volume (μL)
source_well_id: source well ID; if None, one is selected automatically
target_well_id: target well ID; if None, one is selected automatically
liquid_info: liquid information
Returns:
bool: whether the transfer succeeded
"""
try:
# Automatically select the source well
if source_well_id is None:
available_wells = source.find_wells_with_volume(volume)
if not available_wells:
logger.error(f"Source resource {source.name} has no well with sufficient volume")
return False
source_well_id = available_wells[0]
# Automatically select the target well
if target_well_id is None:
available_wells = target.find_wells_with_space(volume)
if not available_wells:
logger.error(f"Target resource {target.name} has no well with sufficient space")
return False
target_well_id = available_wells[0]
# Check that the source well has enough liquid
if not source.get_well_content(source_well_id).can_remove_volume(volume):
logger.error(f"Source well {source_well_id} has insufficient liquid")
return False
# Check that the target well has enough space
if not target.get_well_content(target_well_id).can_add_volume(volume):
logger.error(f"Target well {target_well_id} has insufficient space")
return False
# Get the source liquid information
source_content = source.get_well_content(source_well_id)
transfer_liquid_info = liquid_info or source_content.liquid_info
# Perform the transfer
if source.remove_liquid(source_well_id, volume):
if target.add_liquid(target_well_id, volume, transfer_liquid_info):
logger.info(f"Transferred {volume} μL: {source.name}[{source_well_id}] -> {target.name}[{target_well_id}]")
return True
else:
# If adding to the target fails, roll back the source operation
source.add_liquid(source_well_id, volume, source_content.liquid_info)
logger.error("Adding to the target failed; the source operation was rolled back")
return False
else:
logger.error("Removing from the source failed")
return False
except Exception as e:
logger.error(f"Liquid transfer failed: {e}")
return False
def create_material_resource(
name: str,
resource: Resource,
initial_volumes: Optional[Dict[str, float]] = None,
liquid_info: Optional[LiquidInfo] = None,
max_volume: float = 1000.0
) -> MaterialResource:
"""
Convenience function for creating a material resource.
Args:
name: resource name
resource: pylabrobot resource object
initial_volumes: dictionary of initial volumes {well_id: volume}
liquid_info: liquid information
max_volume: maximum volume (μL)
Returns:
MaterialResource: the created material resource
"""
material_resource = MaterialResource(
resource=resource,
default_max_volume=max_volume
)
# Set initial volumes
if initial_volumes:
for well_id, volume in initial_volumes.items():
material_resource.set_well_volume(well_id, volume, liquid_info)
return material_resource
def batch_transfer_liquid(
transfers: List[Tuple[MaterialResource, MaterialResource, float]],
liquid_info: Optional[LiquidInfo] = None
) -> List[bool]:
"""
Batch liquid transfer.
Args:
transfers: list of transfers [(source, target, volume), ...]
liquid_info: liquid information
Returns:
List[bool]: result of each transfer operation
"""
results = []
for source, target, volume in transfers:
result = transfer_liquid(source, target, volume, liquid_info=liquid_info)
results.append(result)
if not result:
logger.warning(f"批量转移中的操作失败: {source.name} -> {target.name}")
success_count = sum(results)
logger.info(f"批量转移完成: {success_count}/{len(transfers)} 成功")
return results
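
A small end-to-end sketch of the resource-tracking API above. A bare pylabrobot Resource stands in for labware, so each resource behaves as a single virtual well whose id equals the resource name; the sizes are arbitrary placeholders:

from pylabrobot.resources import Resource

source = create_material_resource(
    name="reservoir",
    resource=Resource(name="reservoir", size_x=10, size_y=10, size_z=10),
    initial_volumes={"reservoir": 800.0},
    liquid_info=LiquidInfo(liquid_type=LiquidType.BUFFER, description="PBS"),
    max_volume=1000.0,
)
target = create_material_resource(
    name="dest",
    resource=Resource(name="dest", size_x=10, size_y=10, size_z=10),
)

if transfer_liquid(source, target, volume=150.0):
    print(source.get_status_summary())
    print(target.get_detailed_status()["wells"])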

Some files were not shown because too many files have changed in this diff.