StackLLaMA: A hands-on guide to train LLaMA with RLHF​

Published April 5, 2023

Edward Beeching
Kashif Rasul
Younes Belkada
Lewis Tunstall
Leandro von Werra
Nazneen Rajani
Nathan Lambert


Models such as ChatGPT, GPT-4, and Claude are powerful language models that have been fine-tuned using a method called Reinforcement Learning from Human Feedback (RLHF) to be better aligned with how we expect them to behave and would like to use them.

In this blog post, we show all the steps involved in training a LLaMA model to answer questions on Stack Exchange with RLHF through a combination of:

Supervised Fine-tuning (SFT)
Reward / preference modeling (RM)
Reinforcement Learning from Human Feedback (RLHF)
(Figure from the InstructGPT paper: Ouyang, Long, et al. "Training language models to follow instructions with human feedback." arXiv preprint arXiv:2203.02155, 2022.)
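To make the final stage concrete, here is a minimal, illustrative sketch of a single RLHF/PPO step with the TRL library. The checkpoint path, the question, and the reward function are placeholders: the actual pipeline uses the LLaMA checkpoint produced by SFT and a reward model trained on Stack Exchange preference pairs, and the full training scripts live in the TRL repository.

```python
# Illustrative sketch of one PPO step in the RLHF stage with TRL.
# "path/to/sft-llama" and reward_fn are placeholders: the real pipeline uses the
# SFT'd LLaMA checkpoint and a reward model trained on Stack Exchange preferences.
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

model_name = "path/to/sft-llama"
config = PPOConfig(model_name=model_name, learning_rate=1.41e-5, batch_size=1, mini_batch_size=1)

model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)      # policy + value head
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)  # frozen reference for the KL penalty
tokenizer = AutoTokenizer.from_pretrained(model_name)

ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)

def reward_fn(question: str, answer: str) -> torch.Tensor:
    # Placeholder: the real pipeline scores (question, answer) pairs with the trained reward model.
    return torch.tensor(1.0)

question = "Question: How do I check if a file exists in Python?\n\nAnswer: "
query = tokenizer(question, return_tensors="pt").input_ids[0]
response = ppo_trainer.generate(query, max_new_tokens=128)[0][len(query):]
reward = reward_fn(question, tokenizer.decode(response))

# One PPO optimization step: push the policy toward high-reward answers
# while staying close to the reference model.
stats = ppo_trainer.step([query], [response], [reward])
```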

By combining these approaches, we are releasing the StackLLaMA model. This model is available on the 🤗 Hub (see Meta's LLaMA release for the original LLaMA model) and the entire training pipeline is available as part of the Hugging Face TRL library. To give you a taste of what the model can do, try out the demo below!



StackLLaMA is a 7-billion-parameter language model based on Meta’s LLaMA model that has been trained on pairs of questions and answers from Stack Exchange using Reinforcement Learning from Human Feedback (RLHF) with the TRL library. For more details, check out our blog post.

Type in the box below and click the button to generate answers to your most pressing questions!

⚠️ Intended Use: this app and its supporting model are provided as educational tools to explain RLHF with the TRL library; they are not a replacement for human expertise. For more details on the model’s limitations in terms of factuality and biases, see the model card.

⚠️ Data Collection: by default, we are collecting the prompts entered in this app to further improve and evaluate the model. Do not share any personal or sensitive information while using the app! You can opt out of this data collection by unchecking the box below:
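Outside the demo, querying a checkpoint like this from Python is straightforward. The sketch below is hypothetical: the model id is a placeholder rather than the official release name, and the "Question:/Answer:" prompt format is an assumption; check the model card on the Hub for the released weights and the exact prompt template.

```python
# Hypothetical usage sketch: the model id below is a placeholder, not the official
# StackLLaMA release; see the model card for the real weights and prompt format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/stackllama-7b"  # placeholder id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Question: How can I undo the last git commit but keep my changes?\n\nAnswer: "
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```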

{continue reading post on site..}
 








https://archive.is/ujlbv

Demo, data, and code to train an open-source, assistant-style large language model based on GPT-J and LLaMA

📗 Technical Report 2: GPT4All-J

📗 Technical Report 1: GPT4All

🐍 Official Python Bindings

💻 Official Typescript Bindings

💬 Official Web Chat Interface

💬 Official Chat Interface

🦜🔗 Official Langchain Backend

Discord

GPT4All is made possible by our compute partner Paperspace.​

GPT4All-J: An Apache-2 Licensed GPT4All Model​


Run on an M1 Mac (not sped up!)

GPT4All-J Chat UI Installers​

Each installer sets up a native chat client with auto-update functionality that runs on your desktop, with the GPT4All-J model baked in.

Mac/OSX

Windows

Ubuntu

These files are not yet certificate-signed by Windows/Apple, so you will see security warnings on initial installation. We did not want to delay the release while waiting for their signing process to complete.

Find the most up-to-date information on the GPT4All Website

Raw Model​

ggml Model Download Link

Note that this model is only compatible with the C++ bindings found here. It will not work with any existing llama.cpp bindings, as we had to make a large fork of llama.cpp. GPT4All will support the ecosystem around this new C++ backend going forward.

Python bindings are imminent and will be integrated into this repository. Stay tuned on the GPT4All discord for updates.
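As noted, the Python bindings were not yet released when this was posted. As a rough sketch of what usage looks like in later releases of the `gpt4all` Python package (the model filename here is an assumption; the package downloads the ggml weights on first use if they are not already present):

```python
# Rough sketch, assuming a later release of the `gpt4all` Python package.
# The model filename is an assumption.
from gpt4all import GPT4All

model = GPT4All("ggml-gpt4all-j-v1.3-groovy")  # assumed GPT4All-J model name
response = model.generate("Explain the difference between a list and a tuple in Python.", max_tokens=200)
print(response)
```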
 


About​

Inpaint anything using Segment Anything and inpainting models.

Inpaint Anything: Segment Anything Meets Image Inpainting​

  • Authors: Tao Yu, Runseng Feng, Ruoyu Feng, Jinming Liu, Xin Jin, Wenjun Zeng and Zhibo Chen.
  • Institutes: University of Science and Technology of China; Eastern Institute for Advanced Study.
  • Paper: arXiv (releasing)

TL;DR: Users can select any object in an image by clicking on it. With powerful vision models, e.g., SAM, LaMa, and Stable Diffusion (SD), Inpaint Anything is able to remove the object smoothly (i.e., Remove Anything). Further, prompted by user input text, Inpaint Anything can fill the object with any desired content (i.e., Fill Anything) or arbitrarily replace its background (i.e., Replace Anything).
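As a rough sketch of the first, click-to-mask step, here is what the point-prompted segmentation looks like with Meta's `segment_anything` package; the checkpoint path, image file, and click coordinate are placeholder assumptions rather than values from the project.

```python
# Rough sketch of the "click to select" step with Meta's segment-anything package.
# The checkpoint path, image file, and click coordinate are placeholders.
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single positive click on the object the user wants to remove or fill.
point_coords = np.array([[250, 187]])
point_labels = np.array([1])
masks, scores, _ = predictor.predict(
    point_coords=point_coords, point_labels=point_labels, multimask_output=True
)
mask = masks[np.argmax(scores)]  # boolean mask handed to the inpainting stage (LaMa or SD)
```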

🌟 Inpaint Anything Features​

  • Remove Anything
  • Fill Anything
  • Replace Anything

💡 Highlights​

🔥 Remove Anything​

Click on an object in the image, and Inpaint Anything will remove it instantly!

Demo​

🔥 Fill Anything​

Text prompt: "a teddy bear on a bench"
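A similarly hedged sketch of the fill step, using diffusers' Stable Diffusion inpainting pipeline: the checkpoint id and input files are assumptions, and the actual project also post-processes (e.g., dilates) the SAM mask before inpainting.

```python
# Rough sketch of "Fill Anything" with the diffusers inpainting pipeline.
# Checkpoint id and file names are assumptions; the mask is the one produced by SAM.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("example.jpg").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("L").resize((512, 512))  # white = region to repaint

result = pipe(prompt="a teddy bear on a bench", image=image, mask_image=mask).images[0]
result.save("filled.png")
```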
 


demo site:
Sign Up - Open Assistant


OpenAssistant Conversations Dataset (OASST1)​

Dataset Summary​

In an effort to democratize research on large-scale alignment, we release OpenAssistant Conversations (OASST1), a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages distributed across 66,497 conversation trees, in 35 different languages, annotated with 461,292 quality ratings. The corpus is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.

Please refer to our paper for further details.
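The corpus can be pulled directly from the Hub with the `datasets` library; a small sketch (the column names shown here are the ones listed on the dataset card):

```python
# Minimal sketch: loading OASST1 from the Hugging Face Hub with the datasets library.
from datasets import load_dataset

ds = load_dataset("OpenAssistant/oasst1")
msg = ds["train"][0]
# Each row is a single message in a conversation tree, annotated with role,
# language, and links back to its parent message / tree.
print(msg["role"], msg["lang"], msg["text"][:200])
```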



 





This project brings large language models and LLM-based chatbots to web browsers. Everything runs inside the browser with no server support, accelerated with WebGPU. This opens up a lot of fun opportunities to build AI assistants for everyone and enables privacy while enjoying GPU acceleration. Please check out our GitHub repo to see how we did it. There is also a demo which you can try out.

 