news 2026/3/18 13:32:47

Ready to Use Out of the Box! A Quick Hands-On with the DASD-4B-Thinking Text Generation Model

张小明

Front-end Developer

1. Why Is This Model Worth 5 Minutes of Your Time?

Have you ever had moments like these:

  • You want to write out a rigorous mathematical derivation, but get stuck on an intermediate step, unsure how to proceed;
  • You need to generate runnable Python code to process experimental data, yet find yourself debugging error after error;
  • You set out to write a physics problem with clear logic and complete steps for your students, only to get lost in it yourself first...

DASD-4B-Thinking was built precisely for these "think it through before you write" scenarios. It is not one of those glib, semantically fuzzy general-purpose models; it is a dedicated long chain-of-thought (Long-CoT) reasoner. Like a patient science teacher, it breaks the reasoning process down for you step by step instead of just tossing out an answer.

More importantly, it is already deployed: no environment to install, no GPU to configure, no parameters to tune. Open it and it works.
This is not a demo you have to wrestle with for half a day before it runs; it is a genuinely out-of-the-box reasoning tool. This article skips every technical detour and heads straight for the core experience: from confirming the service is ready to completing one substantive math reasoning question, all within 3 minutes.

We will not cover distillation theory, list parameter tables, or compare benchmark scores. We do exactly one thing: let you verify with your own hands whether it can really surface its "thinking process".

2. Confirm the Service Is Ready: Verify It in Two Commands

Before you start asking questions, first confirm that the backend model service is actually up and running. This step is very simple and requires no extra tools or permissions.

2.1 Check the Service Log to Confirm Loading Is Complete

Open the WebShell terminal provided by the image and run the following command:

cat /root/workspace/llm.log

If you see a steady stream of entries like the following (the full log repeats this pattern for hundreds of lines; it is trimmed to a few representative entries here):

INFO 08-29 14:22:37 [vllm/engine/llm_engine.py:262] Added request 'req-7f8a2b3c' with prompt length 12 tokens.
INFO 08-29 14:22:38 [vllm/engine/llm_engine.py:262] Added request 'req-7f8a2b3d' with prompt length 15 tokens.
INFO 08-29 14:22:39 [vllm/engine/llm_engine.py:262] Added request 'req-7f8a2b3e' with prompt length 18 tokens.
...

then the service is running normally: each "Added request" line shows the vLLM engine accepting and queuing a prompt, which means the model weights have finished loading and the server is live.
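With the log confirming activity, you can send the model its first reasoning question directly from the same WebShell. Below is a minimal Python sketch. It assumes the image exposes vLLM's OpenAI-compatible API on localhost:8000 and registers the model under the name DASD-4B-Thinking; both are assumptions, so adjust the URL and model name to match your deployment:

```python
import json
import urllib.request

# Assumed endpoint and model name for the vLLM OpenAI-compatible server --
# adjust these if your image uses a different port or registered name.
API_URL = "http://localhost:8000/v1/chat/completions"
MODEL_NAME = "DASD-4B-Thinking"


def build_request(question: str, max_tokens: int = 2048) -> dict:
    """Build a chat-completions payload for a step-by-step reasoning question."""
    return {
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": question}],
        "max_tokens": max_tokens,
        "temperature": 0.6,  # a moderate value; tune to taste
    }


def ask(question: str) -> str:
    """POST the question to the server and return the model's reply text."""
    payload = json.dumps(build_request(question)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI-style response shape: first choice's message content.
    return body["choices"][0]["message"]["content"]


# Example (requires the server to be running):
# print(ask("Prove that the sum of the first n odd numbers is n^2, step by step."))
```

A quick sanity check before running it: `curl http://localhost:8000/v1/models` should list the served model if the server answers on the default port (again, an assumption about this image's setup).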