Uni-Lab-OS/unilabos/app/web/api.py
Xuwznln 9aeffebde1 0.10.7 Update (#101)
* Clean up registry to make it easier to understand (#76)

* delete deprecated mock devices

* rename categories

* combine chromatographic devices

* rename rviz simulation nodes

* organic virtual devices

* parse vessel_id

* run registry completion before merge

---------

Co-authored-by: Xuwznln <18435084+Xuwznln@users.noreply.github.com>

* fix: workstation handlers and vessel_id parsing

* fix: working dir error when input config path
feat: report publish topic when error

* modify default discovery_interval to 15s

* feat: add trace log level

* feat: add ChinWe device control class with serial-communication and motor-control support (#79)

* fix: drop_tips not using auto resource select

* fix: discard_tips error

* fix: discard_tips

* fix: prcxi_res

* add: prcxi res
fix: startup slow

* feat: workstation example

* fix pumps and liquid_handler handle

* feat: improve protocol node runtime logging

* fix all protocol_compilers and remove deprecated devices

* feat: add the use_remote_resource parameter

* fix and remove redundant info

* bugfixes on organic protocols

* fix filter protocol

* fix protocol node

* temporarily tolerate incorrect driver implementations

* fix: prcxi import error

* use call_async in all service to avoid deadlock

* fix: figure_resource

* Update recipe.yaml

* add workstation template and battery example

* feat: add sk & ak

* update workstation base

* Create workstation_architecture.md

* refactor: workstation_base now contains only business logic; communication and sub-device management are delegated to ProtocolNode

* refactor: ProtocolNode→WorkstationNode

* Add:msgs.action (#83)

* update: Workstation dev, bump version from 0.10.3 to 0.10.4 (#84)

* Add:msgs.action

* update: bump version from 0.10.3 to 0.10.4

* simplify resource system

* uncompleted refactor

* example for use WorkstationBase

* feat: websocket

* feat: websocket test

* feat: workstation example

* feat: action status

* fix: incorrect registration of the station's own methods

* fix: restore the protocol node handler methods

* fix: build

* fix: missing job_id key

* ws test version 1

* ws test version 2

* ws protocol

* add logging for material-relationship uploads

* add logging for material-relationship uploads

* fix material-relationship upload

* fix broken tracker-instance tracking in the workstation

* add handle detection; upload material edge relationships

* fix event loop error

* fix edge reporting error

* fix async error

* update the schema title field

* support auto-refresh of host node info and related data

* registry editor

* fix message errors when status updates are sent at high frequency

* add addr parameter

* fix: addr param

* fix: addr param

* drop labid and the mandatory config input

* Add action definitions for LiquidHandlerSetGroup and LiquidHandlerTransferGroup

- Created LiquidHandlerSetGroup.action with fields for group name, wells, and volumes.
- Created LiquidHandlerTransferGroup.action with fields for source and target group names and unit volume.
- Both actions include response fields for return information and success status (a rough goal sketch follows below).
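
As a rough illustration only (field names are inferred from the bullets above and not verified against the generated interfaces), goals for the two actions might look like:

    # hypothetical LiquidHandlerSetGroup goal
    {"group_name": "reagent_A", "wells": ["A1", "A2", "A3"], "volumes": [50.0, 50.0, 50.0]}
    # hypothetical LiquidHandlerTransferGroup goal
    {"source_group_name": "reagent_A", "target_group_name": "sample_group", "unit_volume": 10.0}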

* Add LiquidHandlerSetGroup and LiquidHandlerTransferGroup actions to CMakeLists

* Add set_group and transfer_group methods to PRCXI9300Handler and update liquid_handler.yaml

* change result_info to a dict type

* add UAT address substitution

* runze multiple pump support

(cherry picked from commit 49354fcf39)

* remove runze multiple software obtainer

(cherry picked from commit 8bcc92a394)

* support multiple backbone

(cherry picked from commit 4771ff2347)

* Update runze pump format

* Correct runze multiple backbone

* Update runze_multiple_backbone

* Correct runze pump multiple receive method.

* Correct runze pump multiple receive method.

* transfer_group for PRCXI9320: support one-to-many and many-to-many transfers

* remove MQTT, update the launch docs, provide a registry example file, bump to 0.10.5

* fix import error

* fix dupe upload registry

* refactor ws client

* add server timeout

* Fix: run-column with correct vessel id (#86)

* fix run_column

* Update run_column_protocol.py

(cherry picked from commit e5aa4d940a)

* resource_update use resource_add

* add slot-position recommendation feature

* redefine the input parameters for slot-position recommendation

* update registry with nested obj

* fix protocol node log_message, added create_resource return value

* fix protocol node log_message, added create_resource return value

* try fix add protocol

* fix resource_add

* fix the liquid-handling station's incorrect aspirate registry entry

* Feature/xprbalance-zhida (#80)

* feat(devices): add Zhida GC/MS pretreatment automation workstation

* feat(devices): add mettler_toledo xpr balance

* balance

* re-complete the zhida registry

* PRCXI9320 json

* PRCXI9320 json

* PRCXI9320 json

* fix resource download

* remove class for resource

* bump version to 0.10.6

* update all registries

* fix protocolnode compatibility

* fix protocolnode compatibility

* Update install md

* Add Defaultlayout

* update the material interface

* fix dict to tree/nested-dict converter

* coin_cell_station draft

* refactor: rename "station_resource" to "deck"

* add standardized BIOYOND resources: bottle_carrier, bottle

* refactor and add BIOYOND resources tests

* add BIOYOND deck assignment and pass all tests

* fix: update resource with correct structure; remove deprecated liquid_handler set_group action

* feat: merge the Neware battery test system driver and config files into workstation_dev_YB2 (#92)

* feat: Neware battery test system driver and registry files

* feat: bring neware driver & battery.json into workstation_dev_YB2

* add bioyond studio draft

* bioyond station with communication init and resource sync

* fix bioyond station and registry

* fix: update resource with correct structure; remove deprecated liquid_handler set_group action

* frontend_docs

* create/update resources with POST/PUT for big amount/ small amount data

* create/update resources with POST/PUT for big amount/ small amount data

* refactor: add itemized_carrier instead of carrier consists of ResourceHolder

* create warehouse by factory func

* update bioyond launch json

* add child_size for itemized_carrier

* fix bioyond resource io

* Workstation templates: Resources and its CRUD, and workstation tasks (#95)

* coin_cell_station draft

* refactor: rename "station_resource" to "deck"

* add standardized BIOYOND resources: bottle_carrier, bottle

* refactor and add BIOYOND resources tests

* add BIOYOND deck assignment and pass all tests

* fix: update resource with correct structure; remove deprecated liquid_handler set_group action

* feat: merge the Neware battery test system driver and config files into workstation_dev_YB2 (#92)

* feat: Neware battery test system driver and registry files

* feat: bring neware driver & battery.json into workstation_dev_YB2

* add bioyond studio draft

* bioyond station with communication init and resource sync

* fix bioyond station and registry

* create/update resources with POST/PUT for big amount/ small amount data

* refactor: add itemized_carrier instead of carrier consists of ResourceHolder

* create warehouse by factory func

* update bioyond launch json

* add child_size for itemized_carrier

* fix bioyond resource io

---------

Co-authored-by: h840473807 <47357934+h840473807@users.noreply.github.com>
Co-authored-by: Xie Qiming <97236197+Andy6M@users.noreply.github.com>

* update the material interface

* Workstation dev yb2 (#100)

* Refactor and extend reaction station action messages

* Refactor dispensing station tasks to enhance parameter clarity and add batch processing capabilities

- Updated `create_90_10_vial_feeding_task` to include detailed parameters for 90%/10% vial feeding, improving clarity and usability.
- Introduced `create_batch_90_10_vial_feeding_task` for batch processing of 90%/10% vial feeding tasks with JSON-formatted input (see the sketch after this list).
- Added `create_batch_diamine_solution_task` for batch preparation of diamine solution, also using JSON-formatted input.
- Refined `create_diamine_solution_task` to include additional parameters for better task configuration.
- Enhanced schema descriptions and default values for improved user guidance.
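
A purely illustrative sketch of what such a JSON-formatted batch input could look like (every field name here is an assumption for illustration, not taken from the actual task schema):

    # hypothetical payload for create_batch_90_10_vial_feeding_task
    [
        {"vial_id": "V001", "main_material_pct": 90, "additive_pct": 10},
        {"vial_id": "V002", "main_material_pct": 90, "additive_pct": 10},
    ]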

* fix to_plr_resources

* add update remove

* support automatic generation of the selector registry
support transferring materials

* fix resource addition

* fix transfer_resource_to_another generation

* update transfer_resource_to_another parameters to support a spot argument

* add test_resource action

* fix host_node error

* fix host_node test_resource error

* fix host_node test_resource error

* filter local actions

* move internal actions for host node compatibility

* fix bug where errors from sync tasks were not displayed

* feat: allow returning materials that do not belong to this node; they can be distinguished later via decoration, so no warning is emitted

* update todo

* modify bioyond/plr converter, bioyond resource registry, and tests

* pass the tests

* update todo

* add conda-pack-build.yml

* add auto install script for conda-pack-build.yml

(cherry picked from commit 172599adcf)

* update conda-pack-build.yml

* update conda-pack-build.yml

* update conda-pack-build.yml

* update conda-pack-build.yml

* update conda-pack-build.yml

* Add version in __init__.py
Update conda-pack-build.yml
Add create_zip_archive.py

* Update conda-pack-build.yml

* Update conda-pack-build.yml (with mamba)

* Update conda-pack-build.yml

* Fix FileNotFoundError

* Try fix 'charmap' codec can't encode characters in position 16-23: character maps to <undefined>

* Fix unilabos msgs search error

* Fix environment_check.py

* Update recipe.yaml

* Update registry. Update uuid loop figure method. Update install docs.

* Fix nested conda pack

* Fix one-key installation path error

* Bump version to 0.10.7

* Workshop bj (#99)

* Add LaiYu Liquid device integration and tests

Introduce LaiYu Liquid device implementation, including backend, controllers, drivers, configuration, and resource files. Add hardware connection, tip pickup, and simplified test scripts, as well as experiment and registry configuration for LaiYu Liquid. Documentation and .gitignore for the device are also included.

* feat(LaiYu_Liquid): restructure the device module and add hardware documentation

refactor: reorganize the LaiYu_Liquid module directory structure
docs: add control-command documentation for the SOPA pipette and stepper motors
fix: correct the default maximum volume in the device configuration
test: add workbench configuration test cases
chore: remove outdated test scripts and configuration files

* add

* refactor: rename LaiYu_Liquid.py to laiyu_liquid_main.py and update all import references

- Renamed LaiYu_Liquid.py to laiyu_liquid_main.py using git mv
- Updated import references in all affected files
- No functional changes; only naming consistency is improved
- Tests confirm that all imports work correctly

* fix: export LaiYuLiquidBackend from core/__init__.py

- Added LaiYuLiquidBackend to the import list
- Added LaiYuLiquidBackend to the __all__ export list
- Ensures all main classes can be imported correctly

* fix folder-name casing

* Upload the battery assembly workstation secondary-development tutorial (with table of contents) to dev (#94)

* Battery assembly workstation secondary-development tutorial

* Update intro.md

* Materials tutorial

* Update the materials tutorial; add comments on the JSON format

* Update prcxi driver & fix transfer_liquid mix_times (#90)

* Update prcxi driver & fix transfer_liquid mix_times

* fix: correct mix_times type

* Update liquid_handler registry

* test: prcxi.py

* Update registry from pr

* fix one-key script not existing

* clean files

---------

Co-authored-by: Junhan Chang <changjh@dp.tech>
Co-authored-by: ZiWei <131428629+ZiWei09@users.noreply.github.com>
Co-authored-by: Guangxin Zhang <guangxin.zhang.bio@gmail.com>
Co-authored-by: Xie Qiming <97236197+Andy6M@users.noreply.github.com>
Co-authored-by: h840473807 <47357934+h840473807@users.noreply.github.com>
Co-authored-by: LccLink <1951855008@qq.com>
Co-authored-by: lixinyu1011 <61094742+lixinyu1011@users.noreply.github.com>
Co-authored-by: shiyubo0410 <shiyubo@dp.tech>
2025-10-12 23:34:26 +08:00


"""
API模块
提供API路由和处理函数
"""
from fastapi import APIRouter, WebSocket, WebSocketDisconnect
import asyncio
import yaml
from unilabos.app.web.controler import devices, job_add, job_info
from unilabos.app.model import (
Resp,
RespCode,
JobStatusResp,
JobAddResp,
JobAddReq,
)
from unilabos.app.web.utils.host_utils import get_host_node_info
from unilabos.registry.registry import lab_registry
from unilabos.utils.type_check import NoAliasDumper
# 创建API路由器
api = APIRouter()
admin = APIRouter()
# 存储所有活动的WebSocket连接
active_connections: set[WebSocket] = set()
# 存储注册表编辑器的WebSocket连接
registry_editor_connections: set[WebSocket] = set()
# 存储状态页面的WebSocket连接
status_page_connections: set[WebSocket] = set()
# 状态跟踪变量,用于差异检测
_static_data_sent_connections: set[WebSocket] = set()
_previous_host_node_info: dict = {}
_previous_local_devices: dict = {}
def compute_host_node_diff(current: dict, previous: dict) -> dict:
    """Compute the diff of host node info, returning only the parts that changed."""
    diff = {}
    # Availability change
    if current.get("available") != previous.get("available"):
        diff["available"] = current.get("available")
    # Device list change
    current_devices = current.get("devices", {})
    previous_devices = previous.get("devices", {})
    if current_devices != previous_devices:
        diff["devices"] = current_devices
    # Action client change
    current_action_clients = current.get("action_clients", {})
    previous_action_clients = previous.get("action_clients", {})
    if current_action_clients != previous_action_clients:
        diff["action_clients"] = current_action_clients
    # Subscribed topic change
    current_topics = current.get("subscribed_topics", [])
    previous_topics = previous.get("subscribed_topics", [])
    if current_topics != previous_topics:
        diff["subscribed_topics"] = current_topics
    # Device status is always included (it needs to update in real time)
    if "device_status" in current:
        diff["device_status"] = current["device_status"]
        diff["device_status_timestamps"] = current.get("device_status_timestamps", {})
    return diff
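
# Illustrative example (not executed): only changed keys appear in the diff,
# while "device_status" is always carried through when present.
#
#   compute_host_node_diff(
#       {"available": True, "devices": {"pump1": {}}, "device_status": {"pump1": "idle"}},
#       {"available": True, "devices": {}},
#   )
#   -> {"devices": {"pump1": {}},
#       "device_status": {"pump1": "idle"},
#       "device_status_timestamps": {}}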
async def broadcast_device_status():
    """Broadcast device status to all connected clients."""
    while True:
        try:
            # Fetch the latest device status
            host_info = get_host_node_info()
            if host_info["available"]:
                # Prepare the payload to send
                status_data = {
                    "type": "device_status",
                    "data": {
                        "device_status": host_info["device_status"],
                        "device_status_timestamps": host_info["device_status_timestamps"],
                    },
                }
                # Send to all connected clients; iterate over a snapshot so the set
                # can be mutated safely when a dead connection is dropped
                for connection in list(active_connections):
                    try:
                        await connection.send_json(status_data)
                    except Exception as e:
                        print(f"Error sending to client: {e}")
                        active_connections.discard(connection)
            await asyncio.sleep(1)  # Update once per second
        except Exception as e:
            print(f"Error in broadcast: {e}")
            await asyncio.sleep(1)
async def broadcast_status_page_data():
    """Broadcast status page data to all connected clients (optimized: incremental updates)."""
    global _previous_local_devices, _static_data_sent_connections, _previous_host_node_info
    while True:
        try:
            if status_page_connections:
                from unilabos.app.web.utils.host_utils import get_host_node_info
                from unilabos.app.web.utils.ros_utils import get_ros_node_info
                from unilabos.app.web.utils.device_utils import get_registry_info
                from unilabos.config.config import BasicConfig
                from unilabos.registry.registry import lab_registry
                from unilabos.ros.msgs.message_converter import msg_converter_manager
                from unilabos.utils.type_check import TypeEncoder
                import json

                # Fetch current data
                host_node_info = get_host_node_info()
                ros_node_info = get_ros_node_info()
                # Connections that still need the static data
                new_connections = status_page_connections - _static_data_sent_connections
                # Send static data (Device Types, Resource Types, Converter Modules) to new connections
                if new_connections:
                    devices = []
                    resources = []
                    modules = {"names": [], "classes": [], "displayed_count": 0, "total_count": 0}
                    if lab_registry:
                        devices = json.loads(
                            json.dumps(lab_registry.obtain_registry_device_info(), ensure_ascii=False, cls=TypeEncoder)
                        )
                        # Resource types
                        for resource_id, resource_info in lab_registry.resource_type_registry.items():
                            resources.append(
                                {
                                    "id": resource_id,
                                    "name": resource_info.get("name", "未命名"),
                                    "file_path": resource_info.get("file_path", ""),
                                }
                            )
                    # Imported converter modules
                    if msg_converter_manager:
                        modules["names"] = msg_converter_manager.list_modules()
                        all_classes = [i for i in msg_converter_manager.list_classes() if "." in i]
                        modules["total_count"] = len(all_classes)
                        modules["classes"] = all_classes
                    # Static data
                    registry_info = get_registry_info()
                    static_data = {
                        "type": "static_data_init",
                        "data": {
                            "devices": devices,
                            "resources": resources,
                            "modules": modules,
                            "registry_info": registry_info,
                            "is_host_mode": BasicConfig.is_host_mode,
                            "host_node_info": host_node_info,  # Initial host node info
                            "ros_node_info": ros_node_info,  # Initial local device info
                        },
                    }
                    # Send to the new connections
                    disconnected_new_connections = set()
                    for connection in new_connections:
                        try:
                            await connection.send_json(static_data)
                            _static_data_sent_connections.add(connection)
                        except Exception as e:
                            print(f"Error sending static data to new client: {e}")
                            disconnected_new_connections.add(connection)
                    # Clean up disconnected new connections
                    for conn in disconnected_new_connections:
                        status_page_connections.discard(conn)
                        _static_data_sent_connections.discard(conn)
                # Check whether the host node info changed
                host_node_diff = compute_host_node_diff(host_node_info, _previous_host_node_info)
                host_changed = bool(host_node_diff)
                # Check whether the local devices changed
                current_devices = ros_node_info.get("registered_devices", {})
                devices_changed = current_devices != _previous_local_devices
                # Only send an update when something actually changed
                if host_changed or devices_changed:
                    # Incremental update payload
                    update_data = {
                        "type": "incremental_update",
                        "data": {
                            "timestamp": asyncio.get_event_loop().time(),
                        },
                    }
                    # Include only the changed host node info
                    if host_changed:
                        update_data["data"]["host_node_info"] = host_node_diff
                    # If the local devices changed, add them to the update payload
                    if devices_changed:
                        update_data["data"]["ros_node_info"] = ros_node_info
                        _previous_local_devices = current_devices.copy()
                    # Update the cached host node state
                    if host_changed:
                        _previous_host_node_info = host_node_info.copy()
                    # Send the incremental update to all connections
                    disconnected_connections = set()
                    for connection in status_page_connections:
                        try:
                            await connection.send_json(update_data)
                        except Exception as e:
                            print(f"Error sending incremental update to client: {e}")
                            disconnected_connections.add(connection)
                    # Clean up disconnected connections
                    for conn in disconnected_connections:
                        status_page_connections.discard(conn)
                        _static_data_sent_connections.discard(conn)
            await asyncio.sleep(1)  # Check for updates once per second
        except Exception as e:
            print(f"Error in status page broadcast: {e}")
            await asyncio.sleep(1)
@api.websocket("/ws/device_status")
async def websocket_device_status(websocket: WebSocket):
"""WebSocket端点用于实时获取设备状态"""
await websocket.accept()
active_connections.add(websocket)
try:
while True:
# 保持连接活跃
await websocket.receive_text()
except WebSocketDisconnect:
active_connections.remove(websocket)
except Exception as e:
print(f"WebSocket error: {e}")
active_connections.remove(websocket)
@api.websocket("/ws/registry_editor")
async def websocket_registry_editor(websocket: WebSocket):
"""WebSocket端点用于注册表编辑器"""
await websocket.accept()
registry_editor_connections.add(websocket)
try:
while True:
# 接收来自客户端的消息
message = await websocket.receive_text()
import json
data = json.loads(message)
if data.get("type") == "import_file":
await handle_file_import(websocket, data["data"])
elif data.get("type") == "analyze_file":
await handle_file_analysis(websocket, data["data"])
elif data.get("type") == "analyze_file_content":
await handle_file_content_analysis(websocket, data["data"])
elif data.get("type") == "import_file_content":
await handle_file_content_import(websocket, data["data"])
except WebSocketDisconnect:
registry_editor_connections.remove(websocket)
except Exception as e:
print(f"Registry Editor WebSocket error: {e}")
if websocket in registry_editor_connections:
registry_editor_connections.remove(websocket)
@api.websocket("/ws/status_page")
async def websocket_status_page(websocket: WebSocket):
"""WebSocket端点用于状态页面实时数据更新"""
await websocket.accept()
status_page_connections.add(websocket)
try:
while True:
# 接收来自客户端的消息(用于保持连接活跃)
message = await websocket.receive_text()
# 状态页面通常只需要接收数据,不需要发送复杂指令
except WebSocketDisconnect:
status_page_connections.remove(websocket)
except Exception as e:
print(f"Status Page WebSocket error: {e}")
if websocket in status_page_connections:
status_page_connections.remove(websocket)
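
# Minimal client-side sketch (not part of this module): subscribing to the
# device-status stream with the third-party `websockets` package. The host and
# port are assumptions; the /api/v1 prefix comes from setup_api_routes below.
#
#   import asyncio, json, websockets
#
#   async def watch_device_status():
#       async with websockets.connect("ws://localhost:8002/api/v1/ws/device_status") as ws:
#           while True:
#               frame = json.loads(await ws.recv())
#               if frame.get("type") == "device_status":
#                   print(frame["data"]["device_status"])
#
#   # asyncio.run(watch_device_status())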
async def handle_file_analysis(websocket: WebSocket, request_data: dict):
"""处理文件分析请求,获取文件中的类列表"""
import json
import os
import sys
import inspect
import traceback
from pathlib import Path
from unilabos.config.config import BasicConfig
file_path = request_data.get("file_path")
async def send_log(message: str, level: str = "info"):
"""发送日志消息到客户端"""
try:
await websocket.send_text(json.dumps({"type": "log", "message": message, "level": level}))
except Exception as e:
print(f"Failed to send log: {e}")
async def send_analysis_result(result_data: dict):
"""发送分析结果到客户端"""
try:
await websocket.send_text(json.dumps({"type": "file_analysis_result", "data": result_data}))
except Exception as e:
print(f"Failed to send analysis result: {e}")
try:
# 验证文件路径参数
if not file_path:
await send_analysis_result({"success": False, "error": "文件路径为空", "file_path": ""})
return
# 获取工作目录并构建完整路径
working_dir_str = getattr(BasicConfig, "working_dir", None) or os.getcwd()
working_dir = Path(working_dir_str)
full_file_path = working_dir / file_path
# 验证文件路径
if not full_file_path.exists():
await send_analysis_result(
{"success": False, "error": f"文件路径不存在: {file_path}", "file_path": file_path}
)
return
await send_log(f"开始分析文件: {file_path}")
# 验证文件是Python文件
if not file_path.endswith(".py"):
await send_analysis_result({"success": False, "error": "请选择Python文件 (.py)", "file_path": file_path})
return
full_file_path = full_file_path.absolute()
await send_log(f"文件绝对路径: {full_file_path}")
# 添加文件目录到sys.path
file_dir = str(full_file_path.parent)
if file_dir not in sys.path:
sys.path.insert(0, file_dir)
await send_log(f"已添加路径到sys.path: {file_dir}")
# 确定模块名
module_name = full_file_path.stem
await send_log(f"使用模块名: {module_name}")
# 导入模块进行分析
try:
# 如果模块已经导入,先删除以便重新导入
if module_name in sys.modules:
del sys.modules[module_name]
await send_log(f"已删除旧模块: {module_name}")
import importlib.util
spec = importlib.util.spec_from_file_location(module_name, full_file_path)
if spec is None or spec.loader is None:
await send_analysis_result(
{"success": False, "error": "无法创建模块规范", "file_path": str(full_file_path)}
)
return
module = importlib.util.module_from_spec(spec)
sys.modules[module_name] = module
spec.loader.exec_module(module)
await send_log(f"成功导入模块用于分析: {module_name}")
except Exception as e:
await send_analysis_result(
{"success": False, "error": f"导入模块失败: {str(e)}", "file_path": str(full_file_path)}
)
return
# 分析模块中的类
classes = []
for name in dir(module):
try:
obj = getattr(module, name)
if isinstance(obj, type) and obj.__module__ == module_name:
# 获取类的文档字符串
docstring = inspect.getdoc(obj) or ""
# 只取第一行作为简短描述
short_desc = docstring.split("\n")[0] if docstring else ""
classes.append({"name": name, "docstring": short_desc, "full_docstring": docstring})
except Exception as e:
await send_log(f"分析类 {name} 时出错: {str(e)}", "warning")
continue
if not classes:
await send_analysis_result(
{
"success": False,
"error": "模块中未找到任何类定义",
"file_path": str(full_file_path),
"module_name": module_name,
}
)
return
await send_log(f"找到 {len(classes)} 个类: {[cls['name'] for cls in classes]}")
# 发送分析结果
await send_analysis_result(
{"success": True, "file_path": str(full_file_path), "module_name": module_name, "classes": classes}
)
except Exception as e:
await send_analysis_result(
{
"success": False,
"error": f"分析过程中发生错误: {str(e)}",
"file_path": file_path,
"traceback": traceback.format_exc(),
}
)
async def handle_file_content_analysis(websocket: WebSocket, request_data: dict):
"""处理文件内容分析请求,直接分析上传的文件内容"""
import json
import os
import sys
import inspect
import traceback
import tempfile
from pathlib import Path
file_name = request_data.get("file_name")
file_content = request_data.get("file_content")
file_size = request_data.get("file_size", 0)
async def send_log(message: str, level: str = "info"):
"""发送日志消息到客户端"""
try:
await websocket.send_text(json.dumps({"type": "log", "message": message, "level": level}))
except Exception as e:
print(f"Failed to send log: {e}")
async def send_analysis_result(result_data: dict):
"""发送分析结果到客户端"""
try:
await websocket.send_text(json.dumps({"type": "file_analysis_result", "data": result_data}))
except Exception as e:
print(f"Failed to send analysis result: {e}")
try:
# 验证文件内容
if not file_name or not file_content:
await send_analysis_result({"success": False, "error": "文件名或文件内容为空", "file_name": file_name})
return
await send_log(f"开始分析文件: {file_name} ({file_size} 字节)")
# 验证文件是Python文件
if not file_name.endswith(".py"):
await send_analysis_result({"success": False, "error": "请选择Python文件 (.py)", "file_name": file_name})
return
# 创建临时文件
with tempfile.NamedTemporaryFile(mode="w", suffix=".py", delete=False, encoding="utf-8") as temp_file:
temp_file.write(file_content)
temp_file_path = temp_file.name
await send_log(f"创建临时文件: {temp_file_path}")
try:
# 添加临时文件目录到sys.path
temp_dir = str(Path(temp_file_path).parent)
if temp_dir not in sys.path:
sys.path.insert(0, temp_dir)
await send_log(f"已添加临时目录到sys.path: {temp_dir}")
# 确定模块名(去掉.py扩展名
module_name = file_name.replace(".py", "").replace("-", "_").replace(" ", "_")
await send_log(f"使用模块名: {module_name}")
# 导入模块进行分析
try:
# 如果模块已经导入,先删除以便重新导入
if module_name in sys.modules:
del sys.modules[module_name]
await send_log(f"已删除旧模块: {module_name}")
import importlib.util
spec = importlib.util.spec_from_file_location(module_name, temp_file_path)
if spec is None or spec.loader is None:
raise Exception("无法创建模块规范")
module = importlib.util.module_from_spec(spec)
sys.modules[module_name] = module
spec.loader.exec_module(module)
await send_log(f"成功导入模块用于分析: {module_name}")
except Exception as e:
await send_analysis_result(
{"success": False, "error": f"导入模块失败: {str(e)}", "file_name": file_name}
)
return
# 分析模块中的类
classes = []
for name in dir(module):
try:
obj = getattr(module, name)
if isinstance(obj, type) and obj.__module__ == module_name:
# 获取类的文档字符串
docstring = inspect.getdoc(obj) or ""
# 只取第一行作为简短描述
short_desc = docstring.split("\n")[0] if docstring else "无描述"
classes.append({"name": name, "docstring": short_desc, "full_docstring": docstring})
except Exception as e:
await send_log(f"分析类 {name} 时出错: {str(e)}", "warning")
continue
if not classes:
await send_analysis_result(
{
"success": False,
"error": "模块中未找到任何类定义",
"file_name": file_name,
"module_name": module_name,
}
)
return
await send_log(f"找到 {len(classes)} 个类: {[cls['name'] for cls in classes]}")
# 发送分析结果
await send_analysis_result(
{
"success": True,
"file_name": file_name,
"module_name": module_name,
"classes": classes,
"temp_file_path": temp_file_path, # 保存临时文件路径供后续使用
}
)
finally:
# 清理临时文件(在导入完成后再删除)
try:
if os.path.exists(temp_file_path):
# 延迟删除,给导入操作留出时间
import threading
def delayed_cleanup():
import time
time.sleep(60) # 等待60秒后删除
try:
os.unlink(temp_file_path)
except OSError:
pass
threading.Thread(target=delayed_cleanup, daemon=True).start()
except Exception as e:
await send_log(f"清理临时文件时出错: {str(e)}", "warning")
except Exception as e:
await send_analysis_result(
{
"success": False,
"error": f"分析过程中发生错误: {str(e)}",
"file_name": file_name,
"traceback": traceback.format_exc(),
}
)
async def handle_file_content_import(websocket: WebSocket, request_data: dict):
"""处理基于文件内容的导入请求"""
import json
import os
import sys
import traceback
import tempfile
from pathlib import Path
file_name = request_data.get("file_name")
file_content = request_data.get("file_content")
file_size = request_data.get("file_size", 0)
registry_type = request_data.get("registry_type", "device")
class_name = request_data.get("class_name")
module_prefix = request_data.get("module_prefix", "")
async def send_log(message: str, level: str = "info"):
"""发送日志消息到客户端"""
try:
await websocket.send_text(json.dumps({"type": "log", "message": message, "level": level}))
except Exception as e:
print(f"Failed to send log: {e}")
async def send_progress(message: str):
"""发送进度消息到客户端"""
try:
await websocket.send_text(json.dumps({"type": "progress", "message": message}))
except Exception as e:
print(f"Failed to send progress: {e}")
async def send_error(message: str):
"""发送错误消息到客户端"""
try:
await websocket.send_text(json.dumps({"type": "error", "message": message}))
except Exception as e:
print(f"Failed to send error: {e}")
async def send_result(result_data: dict):
"""发送结果数据到客户端"""
try:
await websocket.send_text(json.dumps({"type": "result", "data": result_data}))
except Exception as e:
print(f"Failed to send result: {e}")
try:
# 验证输入参数
if not file_name or not file_content or not class_name:
await send_error("文件名、文件内容或类名为空")
return
await send_log(f"开始处理文件: {file_name} ({file_size} 字节)")
await send_progress("正在创建临时文件...")
# 创建临时文件
with tempfile.NamedTemporaryFile(mode="w", suffix=".py", delete=False, encoding="utf-8") as temp_file:
temp_file.write(file_content)
temp_file_path = temp_file.name
await send_log(f"创建临时文件: {temp_file_path}")
# 添加临时文件目录到sys.path
temp_dir = str(Path(temp_file_path).parent)
if temp_dir not in sys.path:
sys.path.insert(0, temp_dir)
await send_log(f"已添加临时目录到sys.path: {temp_dir}")
# 确定模块名
module_name = file_name.replace(".py", "").replace("-", "_").replace(" ", "_")
# 如果有 module_prefix则使用完整的模块路径
full_module_name = f"{module_prefix}.{module_name}" if module_prefix else module_name
await send_log(f"使用模块名: {module_name}")
if module_prefix:
await send_log(f"完整模块路径: {full_module_name}")
# 导入模块
try:
# 如果模块已经导入,先删除以便重新导入
if module_name in sys.modules:
del sys.modules[module_name]
await send_log(f"已删除旧模块: {module_name}")
import importlib.util
spec = importlib.util.spec_from_file_location(module_name, temp_file_path)
if spec is None or spec.loader is None:
await send_error("无法创建模块规范")
return
module = importlib.util.module_from_spec(spec)
sys.modules[module_name] = module
spec.loader.exec_module(module)
await send_log(f"成功导入模块: {module_name}")
except Exception as e:
await send_error(f"导入模块失败: {str(e)}")
return
# 验证类存在
if not hasattr(module, class_name):
await send_error(f"模块中未找到类: {class_name}")
return
target_class = getattr(module, class_name)
await send_log(f"找到目标类: {class_name}")
# 使用registry.py的增强类信息功能进行分析
await send_progress("正在生成注册表信息...")
try:
from unilabos.utils.import_manager import get_enhanced_class_info
# 分析类信息
enhanced_info = get_enhanced_class_info(f"{full_module_name}:{class_name}", use_dynamic=True)
if not enhanced_info.get("dynamic_import_success", False):
await send_error("动态导入类信息失败")
return
await send_log("成功分析类信息")
# 根据注册表类型生成不同的schema
if registry_type == "resource":
# 资源类型的简单结构
category_name = file_name.replace(".py", "") if file_name else "unknown"
registry_schema = {
"description": enhanced_info.get("class_docstring", ""),
"category": [category_name],
"class": {
"module": f"{full_module_name}:{class_name}",
"type": "python",
},
"handles": [],
"icon": "",
"init_param_schema": {},
"registry_type": "resource",
"version": "1.0.0",
"file_path": f"uploaded_file://{file_name}",
}
else:
# 设备类型的复杂结构
registry_schema = {
"description": enhanced_info.get("class_docstring", ""),
"class": {
"module": f"{full_module_name}:{class_name}",
"type": "python",
"status_types": {k: v["return_type"] for k, v in enhanced_info["status_methods"].items()},
"action_value_mappings": {},
},
"version": "1.0.0",
"handles": [],
"init_param_schema": {},
"registry_type": "device",
"file_path": f"uploaded_file://{file_name}",
}
# 处理动作方法(仅对设备类型)
for method_name, method_info in enhanced_info["action_methods"].items():
registry_schema["class"]["action_value_mappings"][f"auto-{method_name}"] = {
"type": "UniLabJsonCommandAsync" if method_info["is_async"] else "UniLabJsonCommand",
"goal": {},
"feedback": {},
"result": {},
"args": method_info["args"],
"description": method_info.get("docstring", ""),
}
await send_log("成功生成注册表schema")
# 格式化状态方法信息
status_info = {}
for status_name, status_data in enhanced_info.get("status_methods", {}).items():
status_info[status_name] = {
"return_type": status_data.get("return_type", "未知类型"),
"docstring": status_data.get("docstring", "无描述"),
"is_property": status_data.get("is_property", False),
}
# 格式化动作方法信息
action_info = {}
for action_name, action_data in enhanced_info.get("action_methods", {}).items():
args = action_data.get("args", [])
action_info[action_name] = {
"param_count": len(args),
"params": [
{"name": arg.get("name", ""), "type": arg.get("type", ""), "default": arg.get("default")}
for arg in args
],
"is_async": action_data.get("is_async", False),
"docstring": action_data.get("docstring", "无描述"),
"return_suggestion": "建议返回字典类型 (dict) 以便更好地结构化结果数据",
}
# 准备结果数据
result = {
"class_info": {
"class_name": class_name,
"module_name": module_name,
"module_prefix": module_prefix,
"full_module_name": full_module_name,
"file_name": file_name,
"file_size": file_size,
"docstring": enhanced_info.get("class_docstring", ""),
"dynamic_import_success": enhanced_info.get("dynamic_import_success", False),
"registry_type": registry_type,
},
"registry_schema": registry_schema,
"class_analysis": {
"status_methods": status_info,
"action_methods": action_info,
"init_params": enhanced_info.get("init_params", []),
"status_methods_count": len(status_info),
"action_methods_count": len(action_info),
},
# 保持向后兼容
"action_methods": enhanced_info.get("action_methods", {}),
"status_methods": enhanced_info.get("status_methods", {}),
}
# 发送结果
await send_result(result)
await send_log("分析完成")
except Exception as e:
await send_error(f"分析类信息时发生错误: {str(e)}")
await send_log(f"详细错误信息: {traceback.format_exc()}")
return
finally:
# 清理临时文件
try:
if os.path.exists(temp_file_path):
import threading
def delayed_cleanup():
import time
time.sleep(30) # 等待30秒后删除
try:
os.unlink(temp_file_path)
except OSError:
pass
threading.Thread(target=delayed_cleanup, daemon=True).start()
except Exception as e:
await send_log(f"清理临时文件时出错: {str(e)}", "warning")
except Exception as e:
await send_error(f"处理过程中发生错误: {str(e)}")
await send_log(f"详细错误信息: {traceback.format_exc()}")
async def handle_file_import(websocket: WebSocket, request_data: dict):
"""处理文件导入请求"""
import json
import os
import sys
import traceback
from pathlib import Path
from unilabos.config.config import BasicConfig
file_path = request_data.get("file_path")
registry_type = request_data.get("registry_type", "device")
class_name = request_data.get("class_name")
module_name = request_data.get("module_name")
description = request_data.get("description", "")
safe_class_name = request_data.get("safe_class_name", "")
icon = request_data.get("icon", "")
module_prefix = request_data.get("module_prefix", "")
handles = request_data.get("handles", [])
async def send_log(message: str, level: str = "info"):
"""发送日志消息到客户端"""
try:
await websocket.send_text(json.dumps({"type": "log", "message": message, "level": level}))
except Exception as e:
print(f"Failed to send log: {e}")
async def send_progress(message: str):
"""发送进度消息到客户端"""
try:
await websocket.send_text(json.dumps({"type": "progress", "message": message}))
except Exception as e:
print(f"Failed to send progress: {e}")
async def send_error(message: str):
"""发送错误消息到客户端"""
try:
await websocket.send_text(json.dumps({"type": "error", "message": message}))
except Exception as e:
print(f"Failed to send error: {e}")
async def send_result(result_data: dict):
"""发送结果数据到客户端"""
try:
await websocket.send_text(json.dumps({"type": "result", "data": result_data}))
except Exception as e:
print(f"Failed to send result: {e}")
try:
# 验证文件路径参数
if not file_path:
await send_error("文件路径为空")
return
# 获取工作目录并构建完整路径
working_dir_str = getattr(BasicConfig, "working_dir", None) or os.getcwd()
working_dir = Path(working_dir_str)
full_file_path = working_dir / file_path
# 验证文件路径
if not full_file_path.exists():
await send_error(f"文件路径不存在: {file_path}")
return
await send_log(f"开始处理文件: {file_path}")
await send_progress("正在验证文件...")
# 验证文件是Python文件
if not file_path.endswith(".py"):
await send_error("请选择Python文件 (.py)")
return
full_file_path = full_file_path.absolute()
await send_log(f"文件绝对路径: {full_file_path}")
# 动态导入模块
await send_progress("正在导入模块...")
# 添加文件目录到sys.path
file_dir = str(full_file_path.parent)
if file_dir not in sys.path:
sys.path.insert(0, file_dir)
await send_log(f"已添加路径到sys.path: {file_dir}")
# 确定模块名
if not module_name:
module_name = full_file_path.stem
# 如果有 module_prefix则使用完整的模块路径
full_module_name = f"{module_prefix}.{module_name}" if module_prefix else module_name
await send_log(f"使用模块名: {module_name}")
if module_prefix:
await send_log(f"完整模块路径: {full_module_name}")
# 导入模块
try:
# 如果模块已经导入,先删除以便重新导入
if module_name in sys.modules:
del sys.modules[module_name]
await send_log(f"已删除旧模块: {module_name}")
import importlib.util
spec = importlib.util.spec_from_file_location(module_name, full_file_path)
if spec is None or spec.loader is None:
await send_error("无法创建模块规范")
return
module = importlib.util.module_from_spec(spec)
sys.modules[module_name] = module
spec.loader.exec_module(module)
await send_log(f"成功导入模块: {module_name}")
except Exception as e:
await send_error(f"导入模块失败: {str(e)}")
return
# 分析模块
await send_progress("正在分析模块...")
# 获取模块中的所有类
classes = []
for name in dir(module):
obj = getattr(module, name)
if isinstance(obj, type) and obj.__module__ == module_name:
classes.append((name, obj))
if not classes:
await send_error("模块中未找到任何类定义")
return
await send_log(f"找到 {len(classes)} 个类: {[name for name, _ in classes]}")
# 确定要分析的类
target_class = None
target_class_name = None
if class_name:
# 用户指定了类名
for name, cls in classes:
if name == class_name:
target_class = cls
target_class_name = name
break
if not target_class:
await send_error(f"未找到指定的类: {class_name}")
return
else:
# 自动选择第一个类
target_class_name, target_class = classes[0]
await send_log(f"自动选择类: {target_class_name}")
# 使用registry.py的增强类信息功能进行分析
await send_progress("正在生成注册表信息...")
try:
from unilabos.utils.import_manager import get_enhanced_class_info
# 分析类信息
enhanced_info = get_enhanced_class_info(f"{full_module_name}:{target_class_name}", use_dynamic=True)
if not enhanced_info.get("dynamic_import_success", False):
await send_error("动态导入类信息失败")
return
await send_log("成功分析类信息")
# 根据注册表类型生成不同的schema
if registry_type == "resource":
# 资源类型的简单结构
category_name = Path(file_path).stem if file_path else "unknown"
registry_schema = {
"description": description or enhanced_info.get("class_docstring", ""),
"category": [category_name],
"class": {
"module": f"{full_module_name}:{target_class_name}",
"type": "python",
},
"handles": handles,
"icon": icon,
"init_param_schema": {},
"registry_type": "resource",
"version": "1.0.0",
}
else:
# 设备类型的复杂结构
registry_schema = {
"description": description or enhanced_info.get("class_docstring", ""),
"class": {
"module": f"{full_module_name}:{target_class_name}",
"type": "python",
"status_types": {k: v["return_type"] for k, v in enhanced_info["status_methods"].items()},
"action_value_mappings": {
f"auto-{k}": {
"type": "UniLabJsonCommandAsync" if v["is_async"] else "UniLabJsonCommand",
"goal": {},
"feedback": {},
"result": {},
"schema": lab_registry._generate_unilab_json_command_schema(v["args"], k),
"goal_default": {i["name"]: i["default"] for i in v["args"]},
"handles": [],
}
# 不生成已配置action的动作
for k, v in enhanced_info["action_methods"].items()
},
},
"version": "1.0.0",
"handles": handles,
"icon": icon,
"init_param_schema": {
"config": lab_registry._generate_unilab_json_command_schema(
enhanced_info["init_params"], "__init__"
)["properties"]["goal"],
"data": lab_registry._generate_status_types_schema(enhanced_info["status_methods"]),
},
"registry_type": "device",
}
await send_log("成功生成注册表schema")
# 创建最终的YAML配置使用ID作为根键
if safe_class_name:
item_id = safe_class_name
else:
class_name_safe = (target_class_name or "unknown").lower()
if registry_type == "resource":
# 资源ID通常直接使用类名不加后缀
item_id = class_name_safe
else:
# 设备ID使用类名加_device后缀
item_id = f"{class_name_safe}_device"
final_config = {item_id: registry_schema}
yaml_content = yaml.dump(
final_config, allow_unicode=True, default_flow_style=False, Dumper=NoAliasDumper, sort_keys=True
)
# 格式化状态方法信息
status_info = {}
for status_name, status_data in enhanced_info.get("status_methods", {}).items():
status_info[status_name] = {
"return_type": status_data.get("return_type", "未知类型"),
"docstring": status_data.get("docstring", "无描述"),
"is_property": status_data.get("is_property", False),
}
# 格式化动作方法信息
action_info = {}
for action_name, action_data in enhanced_info.get("action_methods", {}).items():
args = action_data.get("args", [])
action_info[action_name] = {
"param_count": len(args),
"params": [
{"name": arg.get("name", ""), "type": arg.get("type", ""), "default": arg.get("default")}
for arg in args
],
"is_async": action_data.get("is_async", False),
"docstring": action_data.get("docstring", "无描述"),
"return_suggestion": "建议返回字典类型 (dict) 以便更好地结构化结果数据",
}
# 准备结果数据(包含详细的类分析信息)
result = {
"registry_schema": yaml_content,
"item_id": item_id,
"registry_type": registry_type,
"class_name": target_class_name,
"module_name": module_name,
"file_path": file_path,
"config_params": {
"safe_class_name": safe_class_name or item_id,
"description": description,
"icon": icon,
"module_prefix": module_prefix,
"full_module_name": full_module_name,
"handles_count": len(handles),
"handles": handles,
},
"class_analysis": {
"class_docstring": enhanced_info.get("class_docstring", ""),
"status_methods": status_info,
"action_methods": action_info,
"init_params": enhanced_info.get("init_params", []),
"dynamic_import_success": enhanced_info.get("dynamic_import_success", False),
},
}
# 发送结果
await send_result(result)
await send_log("注册表生成完成")
except Exception as e:
await send_error(f"分析类信息时发生错误: {str(e)}")
await send_log(f"详细错误信息: {traceback.format_exc()}")
return
except Exception as e:
await send_error(f"处理过程中发生错误: {str(e)}")
await send_log(f"详细错误信息: {traceback.format_exc()}")
@api.get("/file-browser", summary="Browse files and directories", response_model=Resp)
def get_file_browser_data(path: str = ""):
"""获取文件浏览器数据"""
import os
from pathlib import Path
from unilabos.config.config import BasicConfig
try:
# 获取工作目录
working_dir_str = getattr(BasicConfig, "working_dir", None) or os.getcwd()
working_dir = Path(working_dir_str)
# 如果提供了相对路径,则在工作目录下查找
if path:
target_path = working_dir / path
else:
target_path = working_dir
# 确保路径在工作目录内(安全检查)
target_path = target_path.resolve()
if not target_path.exists():
return Resp(code=RespCode.ErrorInvalidReq, message=f"路径不存在: {path}")
if not target_path.is_dir():
return Resp(code=RespCode.ErrorInvalidReq, message=f"不是目录: {path}")
# 获取目录内容
items = []
parent_path = target_path.parent
items.append(
{
"name": "..",
"type": "directory",
"path": str(parent_path),
"size": 0,
"is_parent": True,
}
)
# 获取子目录和文件
try:
for item in sorted(target_path.iterdir(), key=lambda x: (not x.is_dir(), x.name.lower())):
item_type = "directory" if item.is_dir() else "file"
item_info = {
"name": item.name,
"type": item_type,
"path": str(item),
"size": item.stat().st_size if item.is_file() else 0,
"is_python": item.suffix == ".py" if item.is_file() else False,
"is_parent": False,
}
items.append(item_info)
except PermissionError:
return Resp(code=RespCode.ErrorInvalidReq, message="无权限访问此目录")
return Resp(
data={
"current_path": str(target_path),
"working_dir": str(working_dir),
"items": items,
}
)
except Exception as e:
return Resp(code=RespCode.ErrorInvalidReq, message=f"获取目录信息失败: {str(e)}")
@api.get("/resources", summary="Resource list", response_model=Resp)
def get_resources():
"""获取资源列表"""
isok, data = devices()
if not isok:
return Resp(code=RespCode.ErrorHostNotInit, message=str(data))
return Resp(data=dict(data))
@api.get("/devices", summary="Device list", response_model=Resp)
def get_devices():
"""获取设备列表"""
isok, data = devices()
if not isok:
return Resp(code=RespCode.ErrorHostNotInit, message=str(data))
return Resp(data=dict(data))
@api.get("/job/{id}/status", summary="Job status", response_model=JobStatusResp)
def job_status(id: str):
"""获取任务状态"""
data = job_info(id)
return JobStatusResp(data=data)
@api.post("/job/add", summary="Create job", response_model=JobAddResp)
def post_job_add(req: JobAddReq):
"""创建任务"""
device_id = req.device_id
if not req.data:
return Resp(code=RespCode.ErrorInvalidReq, message="Invalid request data")
req.device_id = device_id
data = job_add(req)
return JobAddResp(data=data)
def setup_api_routes(app):
"""设置API路由"""
app.include_router(admin, prefix="/admin/v1", tags=["admin"])
app.include_router(api, prefix="/api/v1", tags=["api"])
# 启动广播任务
@app.on_event("startup")
async def startup_event():
asyncio.create_task(broadcast_device_status())
asyncio.create_task(broadcast_status_page_data())
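
# Minimal wiring sketch (not part of this module; shown only as an assumption of
# how setup_api_routes is consumed): create a FastAPI app and register the
# routers, which also schedules the two broadcast tasks on startup.
#
#   from fastapi import FastAPI
#   from unilabos.app.web.api import setup_api_routes
#
#   app = FastAPI()
#   setup_api_routes(app)
#   # run with, e.g.: uvicorn my_app:app --port 8002   (module name and port are assumptions)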