By Tom Fenton | 04/06/2026
Running AI Natively on Windows 11 Using an eGPU
In this series of articles, I am investigating how, and how well, AI Large Language Models (LLMs) run on relatively low-powered devices, including a Raspberry Pi and a Linux virtual machine (VM) under VMware Workstation. I used a VM because I wanted a sandboxed environment in which to test the LLM, plus the ability to upload the VM to vSphere for production use. The VM ran the LLM far better than the Pi did. To simplify things, I ran the LLMs using Ollama.
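One thing that makes Ollama convenient for this kind of testing is that, in addition to its CLI, it exposes a local REST API. As a rough sketch, querying it from Python might look like the following; the model name "llama3.2" and the prompt are just placeholders, and the snippet assumes an Ollama server is listening on its default port (11434):

```python
# Sketch: querying a local Ollama server over its REST API using only
# the Python standard library. The model name is an example -- substitute
# whatever model you have pulled locally.
import json
import urllib.request


def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one JSON object instead of a token stream
    }
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_generate_request("llama3.2", "Why is the sky blue?")
# To actually send it (requires a running Ollama server):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

The same request works identically whether Ollama is running natively on Windows or inside a Linux VM, which is what makes the side-by-side comparison below practical.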
Ollama runs natively on Windows, Linux, and macOS, and I thought it would be interesting to see how well it performs on Windows compared to in a VM. This is far from an apples-to-apples comparison: I was able to dedicate only three of the CPU's four cores and 12GB of RAM to the virtual machine, whereas running natively on Windows, Ollama had all of the system's CPU cores and memory available to it.
Installing Ollama on Windows
I am impressed with Ollama; installing it on Linux was dead simple. I was a bit more cautious about installing it on Windows, but after reviewing its documentation, the process seemed straightforward.
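Once the Windows installer finishes, a quick smoke test from PowerShell or Command Prompt might look like the following. The model name here is just an example of a small model; substitute whichever one you plan to test with:

```shell
# Confirm the CLI is on the PATH and the background service responds
ollama --version

# Pull a small model, then start an interactive session with it
ollama pull llama3.2
ollama run llama3.2

# List the models installed locally
ollama list
```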
From a usability standpoint, Ollama appears to support both CPU-only systems and GPU-equipped systems, provided that the required…