Slow but Local LLMs Are Good for a Lot of Things
I vibe coded a local LLM app running on my M3 iPad Air using Apple's models, and it works. Sort of. I had known that Apple has its own models that run on devices to power Apple Intelligence. Does it work? Definitely. Am I replacing ChatGPT or Gemini with the app I created? No way. Not even close. Not because I failed to create some great AI chat app; in fact, it is missing a few important features that users have come to expect from an AI app.

I am still fine-tuning it, and I do not know if I will be sharing it with the world, because I do not know how Apple feels about building an app around their models for other people to use. What I do want to discuss here is that it is surprisingly useful.

Let me first say that the LLM is very limited. I do not know whether that is because of the guardrails Apple put in place on the models, or whether it has something to do with how the app is built. I am still working on that part. What I do know is that I can have a regular conversation with it and ge...
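For context, here is a minimal sketch of what talking to Apple's on-device model looks like with the Foundation Models framework introduced at WWDC 2025 (`SystemLanguageModel`, `LanguageModelSession`). This is a simplified illustration of the general API shape, not the actual code from my app, and the instructions string is just a placeholder:

```swift
import FoundationModels

// A single chat turn against Apple's on-device model.
// Requires iOS/iPadOS 26+ and Apple Intelligence-capable hardware.
func chatOnce(prompt: String) async throws -> String {
    let model = SystemLanguageModel.default

    // The model may be unavailable (unsupported device, Apple
    // Intelligence disabled, model still downloading), so check first.
    guard model.availability == .available else {
        return "On-device model is not available on this device."
    }

    // A session holds the conversation context; instructions steer tone.
    let session = LanguageModelSession(
        instructions: "You are a concise, helpful assistant."  // placeholder
    )

    let response = try await session.respond(to: prompt)
    return response.content
}
```

Keeping the same `LanguageModelSession` alive across turns is what gives the app its conversational memory; creating a fresh session per message would make every reply stateless.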