On-device LoRA Fine-tuning Support for LLM Models on Android/iOS #27365
Unanswered
vikaspatel7780
asked this question in Mobile Q&A
Replies: 0 comments

Hello,
I am exploring the possibility of performing on-device learning for LLM models, specifically focusing on fine-tuning LoRA adapters rather than retraining the full model.
I understand that mobile devices have hardware limitations, but I would like clarification on whether LoRA fine-tuning is currently supported directly on Android or iOS devices. Additionally, is there any support, or a roadmap, for utilizing mobile GPUs or NPUs for such training tasks?
If on-device LoRA fine-tuning is not currently feasible, is the recommended approach to train LoRA adapters on server/cloud hardware and then deploy them for runtime inference on mobile devices?
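For context on why the server-train-then-deploy split works, here is a minimal numpy sketch of the LoRA idea itself (hypothetical shapes, not any specific framework's API): instead of updating the full weight matrix W, you train two small low-rank factors A and B, ship only those as the adapter, and can fold them back into W once for inference on the device.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8  # illustrative sizes; r << d

W = rng.normal(size=(d_out, d_in))       # frozen base weight (not trained)
A = np.zeros((d_out, r))                 # adapter factor, trained (init to zero)
B = rng.normal(size=(r, d_in)) * 0.01    # adapter factor, trained

def forward(x, W, A, B):
    # Base path plus scaled low-rank update: y = x W^T + (alpha/r) x (A B)^T
    return x @ W.T + (alpha / r) * (x @ B.T @ A.T)

x = rng.normal(size=(1, d_in))
y = forward(x, W, A, B)

# For deployment, the adapter can be merged into the base weight once,
# so mobile inference runs a single dense matmul with no extra cost:
W_merged = W + (alpha / r) * (A @ B)
y_merged = x @ W_merged.T
assert np.allclose(y, y_merged)

# The adapter is far smaller than the full weight matrix:
print(A.size + B.size, "adapter params vs", W.size, "full params")
```

The adapter here is d_out*r + r*d_in = 512 parameters versus 4096 for the full matrix, which is what makes training it on a server and shipping only the A/B factors to the device practical.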
Any guidance or references would be greatly appreciated. Thank you!