🦖 X—LLM: Cutting Edge & Easy LLM Finetuning
Updated Jan 17, 2024 - Python
A PHP library for interacting with AI platform providers.
A robust Node.js proxy server that automatically rotates API keys for Gemini and OpenAI APIs when rate limits (429 errors) are encountered. Built with zero dependencies and comprehensive logging.
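The rotation idea behind such a proxy is straightforward: keep a pool of keys, use the current one until a request comes back with HTTP 429, then advance to the next. A minimal conceptual sketch in Python (not the repo's actual Node.js code; the `KeyRotator` name and `request_fn` callback are illustrative assumptions):

```python
class KeyRotator:
    """Conceptual sketch: cycle through API keys, advancing to the
    next key whenever a call reports a 429 rate-limit status."""

    def __init__(self, keys):
        self._keys = list(keys)
        self._index = 0

    @property
    def current(self):
        return self._keys[self._index]

    def rotate(self):
        # Advance to the next key, wrapping around at the end.
        self._index = (self._index + 1) % len(self._keys)
        return self.current

    def call(self, request_fn):
        # Try each key at most once; rotate on a 429 status and
        # return the first non-rate-limited response.
        for _ in range(len(self._keys)):
            status, body = request_fn(self.current)
            if status != 429:
                return status, body
            self.rotate()
        raise RuntimeError("all keys are rate-limited")
```

In a real proxy the `request_fn` would forward the incoming request upstream with the current key attached; the sketch only shows the rotation policy itself.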
🦖 X—LLM: Simple & Cutting Edge LLM Finetuning
This repository features an example of how to use the xllm library, including a solution to a common type of assessment given to LLM engineers.
A solution that prioritizes patients based on urgency, reducing wait times and ensuring that those who need immediate care receive it.
Matrix decomposition and multiplication on the Cerebras Wafer-Scale Engine (WSE) architecture
A tool to keep tabs on your Cerebras Code usage limits in real time.
Conversational AI model for open-domain dialogs.
An image generator built with React.js, TypeScript, Supabase, Together AI, and the Cerebras API.
AI-powered VS Code extension for intelligent code chat and analysis using Cerebras AI models
This evaluation explores the in-context learning (ICL) capabilities of pre-trained language models on arithmetic tasks and sentiment analysis using synthetic datasets. The goal is to assess model performance under different prompting strategies: zero-shot, few-shot, and chain-of-thought.
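The three prompting strategies differ only in how the prompt string is assembled before it is sent to the model. A minimal sketch of each style (the `build_prompt` function and its parameter names are illustrative, not taken from the repository):

```python
def build_prompt(question, strategy="zero_shot", examples=None):
    """Assemble a prompt for one of three in-context learning styles.

    strategy: "zero_shot"         -> question alone
              "few_shot"          -> worked (question, answer) examples first
              "chain_of_thought"  -> cue the model to reason step by step
    examples: list of (question, answer) pairs, used only for few-shot
    """
    if strategy == "zero_shot":
        return f"Q: {question}\nA:"
    if strategy == "few_shot":
        shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
        return f"{shots}\nQ: {question}\nA:"
    if strategy == "chain_of_thought":
        return f"Q: {question}\nA: Let's think step by step."
    raise ValueError(f"unknown strategy: {strategy}")
```

For example, `build_prompt("What is 12 + 7?", "few_shot", [("What is 1 + 1?", "2")])` prepends the worked example before the target question, while the chain-of-thought variant appends a reasoning cue instead.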