Running AI on VMware Workstation — Virtualization Review

By Tom Fenton | 04/24/2026

In-Depth

In a previous article, I ran LLMs on a Raspberry Pi 5. Although I was able to get them working in under five minutes, even the most basic task took around 17 minutes to complete, making the setup useless for real-time work. I suspected the limitation was the Pi's underlying hardware rather than running AI locally per se. To find out, I will run LLMs in a virtual machine on my laptop and see how they perform.

Laptop Specifications
The laptop I will be running the tests on is an older HP Firefly with an Intel Core i7-8665U. The CPU is an 8th-generation Whiskey Lake mobile processor launched in 2019 for laptops and other mobile devices. It has 4 cores, 8 threads, a 1.9 GHz base frequency, and a 4.8 GHz turbo frequency, along with 8 MB of Intel Smart Cache, integrated UHD Graphics 620, and a 15 W TDP. It was designed as a power-efficient CPU for business, multitasking, and office productivity. The system has 16 GB of RAM and a 477 GB SSD.

Testing Framework
For these tests, I will be using VMware Workstation Pro 25H2, a free Type 2 hypervisor that runs on Windows or Linux. You can read more about why I like it so much.

The VM I created to run my tests runs Ubuntu 24.04. I gave the VM 3 CPU cores and 12GB…
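The article does not say which LLM runtime will be used inside the VM, but a common choice for this kind of local test is Ollama, which exposes a simple HTTP API on the guest. As a minimal sketch of how generation speed could be measured in the Ubuntu VM (assuming an Ollama server on its default port, and with the model name `llama3.2:1b` used purely as an illustrative example), a short Python script can read the `eval_count` and `eval_duration` fields that Ollama's `/api/generate` endpoint returns and convert them to tokens per second:

```python
import json
import urllib.error
import urllib.request


def tokens_per_sec(eval_count: int, eval_duration_ns: int) -> float:
    """Convert Ollama's generated-token count and nanosecond duration
    into a tokens-per-second rate."""
    return eval_count / (eval_duration_ns / 1e9)


def benchmark(model: str, prompt: str,
              host: str = "http://localhost:11434") -> None:
    """Send one non-streaming generate request to a local Ollama server
    and report the generation speed. Host, port, and model name are
    assumptions, not details from the article."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=300) as resp:
            stats = json.load(resp)
        rate = tokens_per_sec(stats["eval_count"], stats["eval_duration"])
        print(f"{model}: {rate:.1f} tokens/sec")
    except (urllib.error.URLError, OSError) as exc:
        # No server running (e.g. outside the VM) -- report and move on.
        print(f"Could not reach Ollama at {host}: {exc}")


if __name__ == "__main__":
    benchmark("llama3.2:1b", "Why is the sky blue?")
```

Running this inside the VM after pulling a small model would give a single comparable number per model, which makes it easy to line the laptop's results up against the Raspberry Pi 5 timings from the earlier article.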