bl3.dev
Britt Lewis crafts software experiences.
with over a decade of web and mobile experience at startups, he is a product- and user-minded technologist with design sensibilities.
if you’d like to work with Britt, shoot him a quick email: [email protected]
cnvrs
private & local AI, right on your phone or computer.
- download & chat with thousands of models from 🤗HuggingFace¹ in GGUF format²
- search & continue past conversations
- create characters with your own custom instructions
- try a character from the built-in gallery:
- "Ye Olde AI", a Shakespearean character;
- "Quacks", a creative AI to help work though problems; or
- "Coder", an AI focused on design and development of software.
- chat with PDFs or other text-based files
- GPU-accelerated AI with Metal³, powered by the LLaMA.cpp project⁴
for Apple devices on iOS 17, iPadOS 17, macOS 14, or visionOS 1 or later.
join the beta on testflight, and follow for updates.
roamer
the full Roam Research experience, on the go.
- easily access encrypted graphs with Face ID
- floating "Append Block" button, for daily notes pages or zoomed blocks
- dedicated menu for Roam-specific actions like command palette & reload page
- automatic dark mode support
for Apple devices on iOS 16, iPadOS 16, macOS 13, or visionOS 1 or later.
coming soon. reach out to join the beta.
1. HuggingFace is the top place to discover new models: a favorite of researchers and hackers alike for sharing their experiments. You’ll find models’ raw weights, datasets for training and evaluating models, model demos called "Spaces", and much more.
2. according to the ggml project docs: "GGUF is a file format for storing models for inference with GGML and executors based on GGML. GGUF is a binary format that is designed for fast loading and saving of models, and for ease of reading. Models are traditionally developed using PyTorch or another framework, and then converted to GGUF for use in GGML."
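to make that "fast loading" design concrete, here's a minimal Swift sketch (not cnvrs's actual code) of reading the fixed-size GGUF header described in the spec: a 4-byte magic, a uint32 version, and uint64 counts of tensors and metadata entries, all little-endian:

```swift
import Foundation

// a minimal sketch, assuming the header layout from the GGUF spec:
// bytes 0-3: magic "GGUF"; 4-7: version; 8-15: tensor count; 16-23: metadata kv count.
// values are little-endian, which matches Apple hardware, so we can load them directly.
func readGGUFHeader(from data: Data) -> (version: UInt32, tensorCount: UInt64, kvCount: UInt64)? {
    guard data.count >= 24, data.prefix(4).elementsEqual("GGUF".utf8) else { return nil }
    func load<T: FixedWidthInteger>(_: T.Type, at offset: Int) -> T {
        data.withUnsafeBytes { $0.loadUnaligned(fromByteOffset: offset, as: T.self) }
    }
    return (
        version: load(UInt32.self, at: 4),      // e.g., 3 for current files
        tensorCount: load(UInt64.self, at: 8),
        kvCount: load(UInt64.self, at: 16)
    )
}
```

because the header and tensor metadata sit at fixed, known offsets like this, a multi-gigabyte model file can be opened without parsing the whole file up front.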
3. Metal is Apple’s low-level GPU programming interface.
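for a feel of what "low-level" means here, this is roughly the handshake any Metal-based app performs in Swift before submitting GPU work (MTLCreateSystemDefaultDevice and makeCommandQueue are Metal's real API; the rest is a sketch):

```swift
import Metal

// grab the system's default GPU; nil means Metal isn't available here
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("Metal is not supported on this device")
}
print("GPU: \(device.name)")

// a command queue is how work (e.g., the matrix multiplies behind
// an LLM's layers) gets scheduled onto the GPU
let commandQueue = device.makeCommandQueue()
```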
4. Georgi Gerganov’s LLaMA.cpp is a software library for running AI models like LLMs, converting them into its open GGUF² format, and compressing them so they run more easily on memory-constrained devices, a process known as "quantization."
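quantization is easiest to see in miniature. the sketch below mimics the spirit of llama.cpp's simplest scheme (Q8_0: blocks of 32 weights sharing one scale), though it is illustrative Swift, not llama.cpp's actual C code:

```swift
// a conceptual sketch of block-wise quantization: each block of 32
// Float32 weights is stored as one Float scale plus 32 Int8 values.
func quantizeQ8Block(_ weights: [Float]) -> (scale: Float, values: [Int8]) {
    precondition(weights.count == 32)
    let amax = weights.map(abs).max() ?? 0
    let scale = amax / 127                        // maps the largest weight to ±127
    let inv = scale > 0 ? 1 / scale : 0
    let values = weights.map { Int8(($0 * inv).rounded()) }
    return (scale, values)
}

// dequantize at inference time: multiply each Int8 back by the block's scale
func dequantizeQ8Block(scale: Float, values: [Int8]) -> [Float] {
    values.map { Float($0) * scale }
}
```

in this sketch, one Float scale plus 32 Int8s takes 36 bytes where the original 32 Float32s took 128: roughly a 3.5× reduction in memory, traded for a small loss of precision.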