Discover gists · GitHub

@DavidLiuAmzn
DavidLiuAmzn / BestieBot.md
Created April 24, 2026 06:11
Bestie Bot prompt

You speak only in Gen Alpha slang, regardless of how the user speaks to you. Use emojis at will, and use emphasis freely. Be friendly: talk to the user like they're your friend. Ask the user to share details so that you can be more helpful. Be humorous: make jokes at will, without distracting from your main message. Don't glaze the user; ask questions that make 'em think, rather than accepting their word as gospel.

If the user sends a message that's entirely wrapped in quotation marks ("), translate the message into Gen Alpha slang and explain the terms used.

Don't use terms that just ain't gonna happen or are already old, such as:

  • fetch
  • swag
@fangzhongbao
fangzhongbao / llm-wiki.md
Created April 24, 2026 06:12 — forked from karpathy/llm-wiki.md
llm-wiki

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file: it is designed to be copy-pasted to your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
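The query-time retrieval loop described above can be sketched in a few lines. This is a minimal illustration with hypothetical documents, using bag-of-words cosine similarity as a stand-in for a real embedding model; actual RAG systems use dense vector search, but the shape of the loop is the same: rank chunks against the query, stuff the top-k into the prompt, repeat from scratch on every question.

```python
from collections import Counter
import math

def vectorize(text):
    # Bag-of-words term counts; a real system would use dense embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical document chunks standing in for uploaded files.
docs = [
    "the cache is invalidated on every write",
    "retrieval augments the prompt with document chunks",
    "gardening tips for spring",
]

def retrieve(query, k=2):
    # Rank all chunks against the query and return the top-k;
    # these would be pasted into the LLM prompt, then discarded.
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

print(retrieve("which document chunks does retrieval add to the prompt?"))
```

Note that nothing persists between calls to `retrieve`: that statelessness is exactly the "no accumulation" problem the paragraph describes.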

@sigridjineth
sigridjineth / x-audience-warmup_SKILL.md
Created April 14, 2026 09:00
I made a skill for OpenClaw/Hermes Agents from cailynyongyong's HOW TO GO VIRAL ON X
name: x-audience-warmup
description: Use when starting an X (Twitter) account from scratch, when follower count is below 500, or when preparing an account to receive viral traffic before posting demo videos or trend-hijacking content.

Overview

A cold X account with no followers wastes viral content. This skill covers the warm-up phase: getting from 0 to 500-1000 followers before deploying x-viral-demo-video or x-trend-hijacking posts.

Expected timeline: ~9 months from first post to first viral moment.

@r-karra
r-karra / inter_IIB_questions.md
Last active April 24, 2026 06:11
Inter Maths IIB questions

Mathematics IIB: Long Answer Questions (LAQs)

Practice Set: Circles & Indefinite Integrals (Telangana Intermediate Board)


I. Circles

Question 1

Find the equation of a circle passing through the points $(1, 2), (3, -4),$ and $(5, -6)$.
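A quick numeric check for this type of problem (a sketch, not part of the original question set): substituting each point into the general form $x^2 + y^2 + 2gx + 2fy + c = 0$ gives one linear equation per point, so the three points yield a 3×3 linear system in $(g, f, c)$.

```python
import numpy as np

# General circle: x^2 + y^2 + 2gx + 2fy + c = 0.
# Each point (x, y) gives: 2x*g + 2y*f + c = -(x^2 + y^2).
pts = [(1, 2), (3, -4), (5, -6)]
A = np.array([[2 * x, 2 * y, 1] for x, y in pts], dtype=float)
b = np.array([-(x**2 + y**2) for x, y in pts], dtype=float)
g, f, c = np.linalg.solve(A, b)
print(g, f, c)  # -11.0 -2.0 25.0  →  x^2 + y^2 - 22x - 4y + 25 = 0
```

Center $(-g, -f) = (11, 2)$ and radius $\sqrt{g^2 + f^2 - c} = \sqrt{121 + 4 - 25} = 10$.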

@yamiroro
yamiroro / llm-wiki.md
Created April 24, 2026 06:09 — forked from karpathy/llm-wiki.md
llm-wiki
