AI Exploration with Win10 + WSL2 + Ubuntu 22.04 (Part 1)
Architecture diagram (figure)
Installing multiple Ubuntu subsystems under WSL2
The idea is to use separate subsystems to isolate different AI exploration projects and avoid problems such as dependency conflicts.
1. Install Ubuntu 22.04
wsl --install -d Ubuntu-22.04
2. Initial environment setup
sudo vim /etc/wsl.conf
[network]
hostname = [new-hostname]
generateHosts = false
generateResolvConf = false
[user]
default = root
sudo vi /etc/hosts
127.0.1.1 [new-hostname].localdomain [new-hostname]
sudo vi /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
sudo vi /etc/systemd/resolved.conf
[Resolve]
DNS=8.8.8.8
sudo systemctl restart systemd-resolved
sudo systemctl restart NetworkManager
3. Update the system and install dependencies
sudo apt update && sudo apt upgrade -y
sudo apt install -y net-tools network-manager zstd build-essential
sudo apt install -y cmake libcurl4-openssl-dev checkinstall git curl unzip
sudo ln -fs /bin/bash /bin/sh
4. Upgrade CMake to 3.28
wget https://github.com/Kitware/CMake/releases/download/v3.28.3/cmake-3.28.3-linux-x86_64.sh
chmod +x cmake-3.28.3-linux-x86_64.sh
sudo ./cmake-3.28.3-linux-x86_64.sh --skip-license --prefix=/usr/local
# Back up the old cmake binary (optional but recommended)
sudo mv /usr/bin/cmake /usr/bin/cmake.old
# Create a symlink to the new version (pointing at /usr/local/bin/cmake)
sudo ln -s /usr/local/bin/cmake /usr/bin/cmake
# Likewise, update cpack, ctest, and related tools to avoid errors later
sudo mv /usr/bin/cpack /usr/bin/cpack.old
sudo ln -s /usr/local/bin/cpack /usr/bin/cpack
sudo mv /usr/bin/ctest /usr/bin/ctest.old
sudo ln -s /usr/local/bin/ctest /usr/bin/ctest
cmake --version
5. Export as a base image
#wsl --export [distro name] "export target path"
wsl --export Ubuntu-22.04 E:\WSL\Ubuntu-22.04.tar
6. Create new subsystems with the WSL import feature
wsl --import [distro name] "distro path" "import source path"
wsl --import Ubuntu-22.04-llamacpp "E:\WSL\Ubuntu-22.04-llamacpp" "E:\WSL\Ubuntu-22.04.tar"
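To verify the import, list the registered distros and open a shell in the new one:
# Both commands run on the Windows side
wsl -l -v
wsl -d Ubuntu-22.04-llamacpp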
7. Port mapping to the Windows host
netsh interface portproxy add v4tov4 listenport=[host listen port] listenaddress=0.0.0.0 connectport=[subsystem port] connectaddress=[subsystem IP]
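A worked example, assuming the subsystem's IP is 172.20.149.74 (check inside WSL with `hostname -I`) and a service listens on subsystem port 8080; run from an elevated Windows prompt:
netsh interface portproxy add v4tov4 listenport=18080 listenaddress=0.0.0.0 connectport=8080 connectaddress=172.20.149.74
# Inspect existing mappings, and remove one when no longer needed
netsh interface portproxy show v4tov4
netsh interface portproxy delete v4tov4 listenport=18080 listenaddress=0.0.0.0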
8. Install a recent Node.js and npm
# If an older Node.js was installed via apt, remove it first to avoid conflicts
sudo apt remove -y nodejs npm
sudo apt autoremove -y
# Download and install nvm (official script; the version number may change, check the project page)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.bashrc
# Printing a version number means the install succeeded
nvm --version
# Install Node.js 20.x LTS (the matching npm comes with it)
nvm install 20
# Make 20.x the default version (so it persists across new terminals)
nvm alias default 20
# Should print v20.x.x (e.g. v20.17.0)
node -v
# Should print the matching npm version (e.g. 10.8.2)
npm -v
# Install pnpm
npm install -g pnpm
pnpm --version
9. Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
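A quick smoke test, e.g. creating a throwaway virtual environment (the path is arbitrary):
uv --version
# Create and use a test venv
uv venv /tmp/uv-test
source /tmp/uv-test/bin/activate
python --version
deactivate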
Installing CUDA, cuDNN, NCCL, and torch
1. Install the CUDA Toolkit
Check the CUDA version supported by the driver:
nvidia-smi
At https://developer.nvidia.com/cuda-downloads, download and run the run file matching your OS and CUDA version.
#13.2
wget https://developer.download.nvidia.com/compute/cuda/13.2.0/local_installers/cuda_13.2.0_595.45.04_linux.run
sudo sh cuda_13.2.0_595.45.04_linux.run
#12.8
wget https://developer.download.nvidia.com/compute/cuda/12.8.0/local_installers/cuda_12.8.0_570.86.10_linux.run
sudo sh cuda_12.8.0_570.86.10_linux.run
#12.9
wget https://developer.download.nvidia.com/compute/cuda/12.9.0/local_installers/cuda_12.9.0_575.51.03_linux.run
sudo sh cuda_12.9.0_575.51.03_linux.run
2. Configure environment variables
#13.2
echo 'export CUDA_HOME=/usr/local/cuda-13.2' >> ~/.bashrc
#12.8
echo 'export CUDA_HOME=/usr/local/cuda-12.8' >> ~/.bashrc
#12.9
echo 'export CUDA_HOME=/usr/local/cuda-12.9' >> ~/.bashrc
echo 'export PATH=$PATH:${CUDA_HOME}/bin' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${CUDA_HOME}/lib64' >> ~/.bashrc
echo 'export PATH=$PATH:/home/ubuntu/.local/bin' >> ~/.bashrc
source ~/.bashrc
3. Check the nvcc version
nvcc --version
4. Install cuDNN
At https://developer.nvidia.com/rdp/cudnn-archive, download the cuDNN archive matching your OS and CUDA version.
# Extract
tar -xvf cudnn-linux-x86_64-8.9.7.29_cuda12-archive.tar.xz
# Copy into the CUDA directory
#13.2
sudo cp cudnn-linux-x86_64-8.9.7.29_cuda12-archive/include/cudnn* /usr/local/cuda-13.2/include
sudo cp -P cudnn-linux-x86_64-8.9.7.29_cuda12-archive/lib/libcudnn* /usr/local/cuda-13.2/lib64
#12.8
sudo cp cudnn-linux-x86_64-8.9.7.29_cuda12-archive/include/cudnn* /usr/local/cuda-12.8/include
sudo cp -P cudnn-linux-x86_64-8.9.7.29_cuda12-archive/lib/libcudnn* /usr/local/cuda-12.8/lib64
#12.9
sudo cp cudnn-linux-x86_64-8.9.7.29_cuda12-archive/include/cudnn* /usr/local/cuda-12.9/include
sudo cp -P cudnn-linux-x86_64-8.9.7.29_cuda12-archive/lib/libcudnn* /usr/local/cuda-12.9/lib64
# Fix file permissions
#13.2
sudo chmod a+r /usr/local/cuda-13.2/include/cudnn*.h /usr/local/cuda-13.2/lib64/libcudnn*
#12.8
sudo chmod a+r /usr/local/cuda-12.8/include/cudnn*.h /usr/local/cuda-12.8/lib64/libcudnn*
#12.9
sudo chmod a+r /usr/local/cuda-12.9/include/cudnn*.h /usr/local/cuda-12.9/lib64/libcudnn*
# Printing the version confirms a successful install
cat /usr/local/cuda/include/cudnn_version.h | grep CUDNN_MAJOR -A 2
5. Install NCCL
At https://developer.nvidia.com/nccl/nccl-download, download and install the NCCL packages matching your CUDA version.
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
#13.2
sudo apt install libnccl2=2.30.3-1+cuda13.2 libnccl-dev=2.30.3-1+cuda13.2
#12.8
sudo apt install libnccl2=2.26.2-1+cuda12.8 libnccl-dev=2.26.2-1+cuda12.8
#12.9
sudo apt install libnccl2=2.30.3-1+cuda12.9 libnccl-dev=2.30.3-1+cuda12.9
6. Install torch
#13.2
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu130
#12.8
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
#12.9
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu129
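A one-line sanity check (assuming one of the installs above succeeded) before running the fuller script in the next step:
python3 -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"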
7. Verification script
import torch
import platform

def get_system_info():
    return {
        "System": platform.system(),
        "Python version": platform.python_version(),
        "PyTorch version": torch.__version__,
        "CUDA available": torch.cuda.is_available(),
        "CUDA version": torch.version.cuda,
        "MPS available": hasattr(torch.backends, "mps") and torch.backends.mps.is_available(),
        "GPU": torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none",
    }

def test_mps():
    if not torch.backends.mps.is_available():
        if not torch.backends.mps.is_built():
            print("MPS not available: this PyTorch build does not include MPS support.")
        else:
            print("MPS not available: requires macOS 12.3+ and an MPS-enabled device.")
    else:
        mps_device = torch.device("mps")
        # Create a tensor on the MPS device
        x = torch.ones(5, device=mps_device)
        # Or equivalently:
        x = torch.ones(5, device="mps")
        # Any operation runs on the GPU
        y = x * 2
        print(y)
        # Moving a model works the same way (YourFavoriteNet is a placeholder):
        # model = YourFavoriteNet()
        # model.to(mps_device)
        # pred = model(x)

if __name__ == "__main__":
    info = get_system_info()
    for k, v in info.items():
        print(f"{k}: {v}")
    test_mps()
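Save the script as, say, check_env.py (the name is arbitrary) and run it:
python3 check_env.py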
Deploying Ollama locally
1. Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
2. Configure the service
sudo vi /etc/systemd/system/ollama.service
# File contents
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=$PATH"
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
[Install]
WantedBy=default.target
# Reload the systemd configuration
sudo systemctl daemon-reload
# Start the service
sudo systemctl start ollama.service
# Check the service status
sudo systemctl status ollama.service
# Enable the service at boot
sudo systemctl enable ollama.service
3. Pull a model
ollama pull qwen3.5:35b
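A quick smoke test against the Ollama API (the model name must match what was pulled above):
# List installed models
curl http://localhost:11434/api/tags
# One-off generation request
curl http://localhost:11434/api/generate -d '{"model": "qwen3.5:35b", "prompt": "Hello", "stream": false}'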
4. Install Nginx
# Install Nginx
sudo apt install nginx -y
# Start Nginx and enable it at boot
sudo systemctl start nginx
sudo systemctl enable nginx
# Verify Nginx is running; you should see the `active (running)` status.
sudo systemctl status nginx
5. Configure API key validation
sudo tee /etc/nginx/conf.d/ollama.conf <<'EOF'
server {
listen 9180;
location / {
if ($http_authorization != "[API KEY]") {
return 403;
}
proxy_pass http://localhost:11434;
}
}
EOF
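Validate the config and reload Nginx, then test the key check; a request without the expected Authorization value should get a 403:
sudo nginx -t && sudo systemctl reload nginx
curl -i http://localhost:9180/api/tags
curl -H "Authorization: [API KEY]" http://localhost:9180/api/tags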
6. Set up the host port mapping
netsh interface portproxy add v4tov4 listenport=9180 listenaddress=0.0.0.0 connectport=9180 connectaddress=172.20.149.74
Deploying llama.cpp locally
1. Install llama.cpp
# Clone the repository
cd /usr/local
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build with CMake (recommended)
mkdir build && cd build
# Enable CUDA at build time (recent llama.cpp uses GGML_CUDA; older trees accepted LLAMA_CUDA)
cmake .. -DGGML_CUDA=ON
cmake --build . --config Release -j$(nproc)
echo 'export PATH=$PATH:/usr/local/llama.cpp/build/bin' >> ~/.bashrc
source ~/.bashrc
2. Install modelscope
sudo apt install python3-pip
pip install modelscope -i https://pypi.tuna.tsinghua.edu.cn/simple
echo 'export PATH=$PATH:/home/ubuntu/.local/bin' >> ~/.bashrc
source ~/.bashrc
3. Download a model from ModelScope
Pick a suitable model at https://www.modelscope.cn/models.
modelscope download --model [model collection] [file] --local_dir [download path]
modelscope download --model Qwen/Qwen3.5-27B-FP8 README.md --local_dir /usr/local/llama.cpp/build/models
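The command above fetches only README.md. To pull actual weights, name the file explicitly; the GGUF filename below is a placeholder, check the model's file list on ModelScope for the real one:
modelscope download --model [model collection] [model_file_name].gguf --local_dir /usr/local/llama.cpp/build/models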
4. Run the model
# Go to the build output directory
cd /usr/local/llama.cpp/build/bin
# Basic one-shot run
./llama-cli \
-m ~/models/Llama-3.2-1B-Instruct-Q4_K_M.gguf \
-p "你好,请介绍一下自己" \
-n 512
# Interactive chat mode
./llama-cli \
-m ~/models/Llama-3.2-1B-Instruct-Q4_K_M.gguf \
--chat-template llama3 \
-cnv
# Start the HTTP API server
./llama-server \
-m ~/models/Llama-3.2-1B-Instruct-Q4_K_M.gguf \
--host 0.0.0.0 \
--port 8080 \
-c 4096
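llama-server exposes an OpenAI-compatible API, so the server started above can be exercised with a minimal chat request:
curl http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"messages": [{"role": "user", "content": "Hello"}]}'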
5. Configure the service
sudo vi /etc/systemd/system/llama-server.service
Set the API port to 9191:
[Unit]
Description=llama.cpp HTTP Server
After=network.target
[Service]
Type=simple
User=llama
Group=llama
WorkingDirectory=/usr/local/llama.cpp
ExecStart=/usr/local/llama.cpp/build/bin/llama-server \
-m /usr/local/llama.cpp/build/models/model_file_name.gguf \
--port 9191 \
--host 0.0.0.0 \
-c 163840 \
-np 4 \
--threads 12 \
--n-gpu-layers 35 \
--cont-batching \
-b 4096
Restart=always
RestartSec=5
Environment=LD_LIBRARY_PATH=/usr/local/cuda/lib64
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
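The unit runs as a dedicated `llama` user, which the steps above never create; a sketch of creating it and bringing the service up, mirroring the Ollama steps:
# Create a non-login system user if it does not exist yet
sudo useradd -r -s /usr/sbin/nologin llama
sudo systemctl daemon-reload
sudo systemctl start llama-server.service
sudo systemctl enable llama-server.service
sudo systemctl status llama-server.service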
6. Install Nginx
# Install Nginx
sudo apt install nginx -y
# Start Nginx and enable it at boot
sudo systemctl start nginx
sudo systemctl enable nginx
# Verify Nginx is running; you should see the `active (running)` status.
sudo systemctl status nginx
7. Configure API key validation
sudo tee /etc/nginx/conf.d/llamacpp.conf <<'EOF'
server {
listen 9280;
location / {
if ($http_authorization != "[API KEY]") {
return 403;
}
proxy_pass http://localhost:9191;
}
}
EOF
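As with the Ollama proxy, validate and reload, then test through the key-checking proxy (llama-server also serves /v1/models):
sudo nginx -t && sudo systemctl reload nginx
curl -H "Authorization: [API KEY]" http://localhost:9280/v1/models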
8. Set up the host port mapping
netsh interface portproxy add v4tov4 listenport=9280 listenaddress=0.0.0.0 connectport=9280 connectaddress=172.20.149.74
Deploying CoPaw locally
1. Install CoPaw
curl -fsSL https://copaw.agentscope.io/install.sh | bash
2. Initialize CoPaw
/home/ubuntu/.local/bin/copaw init --defaults
3. Start CoPaw
/home/ubuntu/.local/bin/copaw app
4. Open the console
http://127.0.0.1:8088
5. Create a systemd service
sudo tee /etc/systemd/system/copaw.service <<EOF
[Unit]
Description=CoPaw Inference Service
After=network.target
[Service]
Type=simple
User=ubuntu
Group=ubuntu
WorkingDirectory=/home/ubuntu/.copaw
ExecStart=/home/ubuntu/.local/bin/copaw app
ExecStop=/home/ubuntu/.local/bin/copaw shutdown
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl start copaw.service
sudo systemctl enable copaw.service
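Check that the service came up:
sudo systemctl status copaw.service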
6. Install Nginx
# Install Nginx
sudo apt install nginx -y
# Start Nginx and enable it at boot
sudo systemctl start nginx
sudo systemctl enable nginx
# Verify Nginx is running; you should see the `active (running)` status.
sudo systemctl status nginx
7. Configure the proxy
CoPaw binds to 127.0.0.1 by default, and edits to its config file kept being overwritten, so Nginx is used as a reverse proxy instead.
sudo tee /etc/nginx/conf.d/copaw.conf <<'EOF'
server {
listen 18088;
location / {
proxy_pass http://localhost:8088;
}
}
EOF
8. Set up the host port mapping
netsh interface portproxy add v4tov4 listenport=18088 listenaddress=0.0.0.0 connectport=18088 connectaddress=172.20.149.74