I tested the local AI on my M1 Mac expecting magic – and got a reality check instead

By Tiernan Ray
Publication Date: 2026-02-01 12:30:00

The M1 MacBook Pro is an old but still powerful device in 2026.

Kyle Kucharski/ZDNET


Key insights from ZDNET

  • Ollama makes it pretty easy to download open source LLMs.
  • Even small models can run painfully slow.
  • Don’t try this without a recent machine with at least 32GB of RAM.

Having reported on artificial intelligence for over a decade, I knew that running AI presents all sorts of computing challenges. For one thing, large language models keep getting bigger, and they need more and more DRAM to hold the “parameters,” or “neural weights,” of their models.
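The memory pressure is easy to see with back-of-the-envelope arithmetic: each parameter has to live somewhere in RAM. The sketch below is illustrative only; the parameter counts and bytes-per-parameter figures are assumptions for round numbers, not the requirements of any specific model.

```python
# Rough DRAM needed just to hold a model's weights. This ignores
# activations, the KV cache, and runtime overhead, so real usage is higher.

def weight_memory_gb(num_params: float, bytes_per_param: float = 2) -> float:
    """Gigabytes needed to store the weights alone."""
    return num_params * bytes_per_param / 1e9

# Illustrative model sizes (assumptions, not specific products):
for params in (3e9, 7e9, 70e9):
    fp16 = weight_memory_gb(params, 2)    # 16-bit weights: 2 bytes each
    q4 = weight_memory_gb(params, 0.5)    # 4-bit quantized: 0.5 byte each
    print(f"{params / 1e9:.0f}B params: ~{fp16:.0f} GB at fp16, ~{q4:.1f} GB at 4-bit")
```

Even a quantized 7B-parameter model wants several gigabytes of memory for weights alone, which is why modest laptops struggle with anything larger.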

Also: How to Install an LLM on MacOS (and Why You Should Do It)

I knew all this, but I wanted to get a first-hand feel for it. I wanted to run a large language model on my home computer.

Now, downloading and running an AI model can require a lot of work setting up the “environment”. Inspired by my colleague Jack Wallen’s reporting on…