- Instruct-tune LLaMA on consumer hardware (Jupyter Notebook; 18.9k stars, 2.2k forks)
- Locally run an Instruction-Tuned Chat-Style LLM, forked from ggml-org/llama.cpp (C; 10.2k stars, 860 forks)
- Quantized inference code for LLaMA models, forked from meta-llama/llama (Python; 1k stars, 97 forks)
- Code for "Stochastic Optimization of Sorting Networks using Continuous Relaxations", ICLR 2019 (Python; 150 stars, 27 forks)
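The quantized-inference entry above refers to running LLaMA weights at reduced precision. As a rough illustration only (the function names and the symmetric per-tensor int8 scheme are assumptions for this sketch, not taken from that repository), quantization maps float weights to 8-bit integers plus a scale factor:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    # Symmetric per-tensor quantization: scale so that max |w| maps to 127.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an approximation of the original float weights.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.03], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

Storing `q` and `s` instead of `w` cuts memory roughly 4x versus float32, at the cost of a small rounding error per weight.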