Let me first say that the LLM is very limited. I do not know whether it is because of the guardrails Apple put in place on the models or something to do with how the app is built; I am still working that out. What I do know is that I can have a regular conversation with it and get rudimentary information that the LLM feels comfortable providing. Recipes? Sure. Ask the Apple LLM what the difference is between an astronomer and an astrophysicist, and I could become an amateur of either or both. I think I'll just listen to Neil deGrasse Tyson on StarTalk and try to learn something instead.
Here are some key experiences:
- As for speed, it's slow. It is usable, but even on an M3 iPad Air, it is going to take time for the technology to reach the point where users can comfortably run a local LLM on their devices without feeling like they are getting a subpar AI experience. Local AI performance will never match what we get from the cloud, running on powerful Nvidia or Google chips, or even Apple's upcoming AI chips based on the M5. Nevertheless, a portable AI agent that users can rely on for day-to-day tasks and requests is coming eventually.
- There is a lot more to this than just an AI chatbot. Right now, it does not seem like each conversation can go very long before the LLM tells me to start a new one. I don't know enough about AI or Apple's AI models (or most things about AI, for that matter), but there is a limit to the number of tokens the AI can process. Also, Apple's own AI researchers found that their models can collapse when a reasoning task becomes too complex, a phenomenon they described as the "illusion of thinking."
- There are clear limits to what a local LLM from Apple can do right now. I am sure they have bigger, better models, but nothing that can fit on our iPhones and iPads yet. I am still playing with the app and trying to find out if there are ways to get more out of it. I don't know what Apple has planned for on-device AI, but given what is being reported about Apple Intelligence, the delays, and now the use of Gemini instead of its own models, it is clear that Apple's models do not meet Apple's own standards. It means that features Apple initially wanted to run on device may be farmed out to the cloud.
I will try to see if I can get better results using open-source LLM models instead of Apple's own. Even if I never share my app with you all, I do want to keep pushing the bounds and use this opportunity to learn about artificial intelligence and coding.
I am certain that over the years, your iPhone or Android device will become a whole lot smarter than what we currently use. Duh, right? But it is more than that. I have read about second brains, and I think our mobile devices will act as exactly that, but in a way that augments our natural abilities rather than displacing them. I really do. At least, that is what Apple Intelligence is meant to do. And I hope to use agents as tools so that I can move on to other matters. I hope you do that as well and do not allow AI to simply displace you.
And it is happening rather quickly, whether we like it or not. AI companies are building tools and agents that are smarter, so they can take on not only repetitive work but analytical work as well. However, rather than simply relying on our agents to think for us, we need to be the final decision makers. We should not blindly accept any report or analysis the AI creates for us; we need to use our humanity, our experience, and who we individually are to make the final call.
By continuing to play with the LLM myself, I hope to learn more about how AI thinks, what I can trust it to do, and eventually gain even more experience in how to use it.