bnew

Chat with AI Characters Offline.​


Runs locally. Zero-configuration.

Supports Llama 2 & GPU Acceleration

Download Desktop App


Zero Configuration​

Start chatting right away with a one-click Desktop installer that "just works" out of the box (GPU & Metal acceleration included!). No coding knowledge needed.

Own Your Data​

The AI models that power Faraday are stored 100% locally on your computer. No one can change the behavior of your Characters, revoke access, or remove your data.

Offline-First​

Faraday is a desktop app that works offline. Enjoy it on the plane, train, or anywhere else where you don't have internet.

Private​

Chat data is saved to your computer and is never sent to a remote server. You can opt-in to share specific chats to help us improve models, but it is never required.
 

bnew

a16z-infra/llama13b-v2-chat

Public. A 13-billion-parameter language model from Meta, fine-tuned for chat completions.

Readme

This is LLaMA v2, the latest open-source language model from Meta. Compared to LLaMA v1, it was trained on more data (~2 trillion tokens) and supports context windows up to 4k tokens. This endpoint uses the 13 billion parameter model released in open source by Meta.
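
If you want to call this endpoint from code, here's a minimal sketch using the replicate Python client. The version hash is a placeholder, and the input parameter names (max_new_tokens, temperature) are assumptions on my part, so check the model's API tab on Replicate.

Code:
# pip install replicate, and set REPLICATE_API_TOKEN in your environment
import replicate

# NOTE: "VERSION_HASH" is a placeholder; copy the current version id
# from the a16z-infra/llama13b-v2-chat model page. The input parameter
# names below are assumptions, so verify them against the API tab.
output = replicate.run(
    "a16z-infra/llama13b-v2-chat:VERSION_HASH",
    input={
        "prompt": "User: What is the capital of France?\nAssistant:",
        "max_new_tokens": 128,
        "temperature": 0.7,
    },
)

# The client streams output back as an iterator of strings.
print("".join(output))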

 

bnew
@Macallik86

h2ogpt has Llama 2 up on their site already.

h2oGPT [meta-llama/Llama-2-7b-chat-hf]
convert the curl header parameters to python requests headers code
-H 'authority: www.example.com'
-H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9'
-H 'accept-language: en-US,en;q=0.9'
-H 'cache-control: no-cache'
-H 'cookie: auth.strategy=local; i18n_redirected=en'
-H 'pragma: no-cache'
-H 'sec-ch-ua: "Not?A_Brand";v="8", "Chromium";v="108"'
-H 'sec-ch-ua-mobile: ?0'
-H 'sec-ch-ua-platform: "Windows"'
-H 'sec-fetch-dest: document'
-H 'sec-fetch-mode: navigate'
-H 'sec-fetch-site: same-origin'
-H 'sec-fetch-user: ?1'
-H 'upgrade-insecure-requests: 1'
-H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36' \


Sure! Here are the equivalent Python requests headers for the curl command you provided:

Code:
headers = {
    'Authorization': 'auth.strategy=local',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'Accept-Language': 'en-US,en;q=0.9',
    'Cache-Control': 'no-cache',
    'Cookie': 'auth.strategy=local; i18n_redirected=en',
    'Pragma': 'no-cache',
    'Sec-CH-UA': '"Not?A_Brand";v="8", "Chromium";v="108"',
    'Sec-CH-UA-Mobile': '?0',
    'Sec-CH-UA-Platform': 'Windows',
    'Sec-Fetch-Dest': 'document',
    'Sec-Fetch-Mode': 'navigate',
    'Sec-Fetch-Site': 'same-origin',
    'Sec-Fetch-User': '?1',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36'
}
response = requests.get('https://www.example.com', headers=headers)
print(response.headers)

Please note that some of the headers in the curl command have been combined or modified in the Python requests code to match the syntax and values expected by the library. Additionally, some headers have been added or modified to include the correct values for the requests library.

h2oGPT [meta-llama/Llama-2-13b-chat-hf]

convert the curl header parameters to python requests headers code
-H 'authority: www.example.com'
-H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9'
-H 'accept-language: en-US,en;q=0.9'
-H 'cache-control: no-cache'
-H 'cookie: auth.strategy=local; i18n_redirected=en'
-H 'pragma: no-cache'
-H 'sec-ch-ua: "Not?A_Brand";v="8", "Chromium";v="108"'
-H 'sec-ch-ua-mobile: ?0'
-H 'sec-ch-ua-platform: "Windows"'
-H 'sec-fetch-dest: document'
-H 'sec-fetch-mode: navigate'
-H 'sec-fetch-site: same-origin'
-H 'sec-fetch-user: ?1'
-H 'upgrade-insecure-requests: 1'
-H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36' \


Sure! Here are the Python requests headers converted from the curl command you provided:
Code:
import requests

headers = {
'Authorization': 'www.example.com,
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'Accept-Language': 'en-US,en;q=0.9',
'Cache-Control': 'no-cache',
'Cookie': 'auth.strategy=local; i18n_redirected=en',
'Pragma': 'no-cache',
'Sec-Ch-Ua': '"Not?A_Brand";v="8", "Chromium";v="108"',
'Sec-Ch-Ua-Mobile': '?0',
'Sec-Ch-Ua-Platform': 'Windows',
'Sec-Fetch-Dest': 'document',
'Sec-Fetch-Mode': 'navigate',
'Sec-Fetch-Site': 'same-origin',
'Sec-Fetch-User': '?1',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36'
}
response = requests.get('https://www.example.com', headers=headers)

Note that I had to remove the space before the = in the Accept header, as requests does not allow spaces in header names. Also, I had to use a dictionary to define the headers, as requests does not allow multiple headers with the same name.

h2oGPT [meta-llama/Llama-2-70b-chat-hf]

convert the curl header parameters to python requests headers code
-H 'authority: www.example.com'
-H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9'
-H 'accept-language: en-US,en;q=0.9'
-H 'cache-control: no-cache'
-H 'cookie: auth.strategy=local; i18n_redirected=en'
-H 'pragma: no-cache'
-H 'sec-ch-ua: "Not?A_Brand";v="8", "Chromium";v="108"'
-H 'sec-ch-ua-mobile: ?0'
-H 'sec-ch-ua-platform: "Windows"'
-H 'sec-fetch-dest: document'
-H 'sec-fetch-mode: navigate'
-H 'sec-fetch-site: same-origin'
-H 'sec-fetch-user: ?1'
-H 'upgrade-insecure-requests: 1'
-H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36' \


Sure! Here's the equivalent Python requests code for the given cURL command:
Code:
import requests

headers = {
'authority': 'www.example.com',
'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'accept-language': 'en-US,en;q=0.9',
'cache-control': 'no-cache',
'cookie': 'auth.strategy=local; i18n_redirected=en',
'pragma': 'no-cache',
'sec-ch-ua': '"Not?A_Brand";v="8", "Chromium";v="108"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"Windows"',
'sec-fetch-dest': 'document',
'sec-fetch-mode': 'navigate',
'sec-fetch-site': 'same-origin',
'sec-fetch-user': '?1',
'upgrade-insecure-requests': '1',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36'
}

response = requests.get('https://www.example.com', headers=headers)

Note that I've removed the -H prefix from the header names, as they are not needed in Python requests. Also, the ' characters around the values of some headers were unnecessary, so I've removed them for clarity.

70B was the only one not to replace 'Authority' with 'Authorization' :ld:
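
For reference, a correct conversion basically just keeps the header names as-is (requests sends them verbatim). The 'authority' entry is the HTTP/2 :authority pseudo-header (Host in HTTP/1.1), which requests derives from the URL, so it can be dropped entirely. Minimal sketch, assuming the target URL is https://www.example.com (the forum swallowed the original link):

Code:
import requests

# 'authority' is omitted: requests derives the Host from the URL itself.
headers = {
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'accept-language': 'en-US,en;q=0.9',
    'cache-control': 'no-cache',
    'cookie': 'auth.strategy=local; i18n_redirected=en',
    'pragma': 'no-cache',
    'sec-ch-ua': '"Not?A_Brand";v="8", "Chromium";v="108"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"Windows"',
    'sec-fetch-dest': 'document',
    'sec-fetch-mode': 'navigate',
    'sec-fetch-site': 'same-origin',
    'sec-fetch-user': '?1',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36',
}

response = requests.get('https://www.example.com', headers=headers)
print(response.status_code)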
 

Macallik86
bnew said:
h2ogpt has Llama 2 up on their site already. [...]
70B was the only one not to replace 'Authority' with 'Authorization' :ld:
How do you access Llama2 (and future new releases) from h2oGPT w/ the total UI like here?

Falcon is doing the job for now, but I don't even see Llama2 as an option
[screenshot of the h2oGPT model dropdown]
 

bnew

Announcing Nous Hermes Llama 2 13b!

This new version of Hermes, trained on Llama 2, has 4k context and beats the original Hermes on every benchmark we tested, including GPT4All, BigBench, and AGIEval.

We have FP16, GGML, and GPTQ weights available!
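
If you want to try the FP16 weights locally with transformers, here's a minimal sketch. The Hugging Face repo id (NousResearch/Nous-Hermes-Llama2-13b) and the Alpaca-style prompt format are assumptions on my part, so double-check the model card.

Code:
# pip install transformers accelerate
# Assumes the FP16 weights live at NousResearch/Nous-Hermes-Llama2-13b (verify on the model card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Nous-Hermes-Llama2-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Hermes models are typically prompted in Alpaca-style instruction format (assumption).
prompt = "### Instruction:\nExplain what a context window is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))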



[benchmark screenshots attached]

 

bnew

Meet FreeWilly, Our Large And Mighty Instruction Fine-Tuned Models​

21 Jul

Stability AI and its CarperAI lab are proud to announce FreeWilly1 and its successor FreeWilly2, two powerful new open-access Large Language Models (LLMs). Both models demonstrate exceptional reasoning ability across varied benchmarks. FreeWilly1 leverages the original LLaMA 65B foundation model and was carefully fine-tuned with a new synthetically generated dataset using Supervised Fine-Tuning (SFT) in standard Alpaca format. Similarly, FreeWilly2 leverages the LLaMA 2 70B foundation model to reach a performance that compares favorably with GPT-3.5 for some tasks.

Both models are research experiments, and are released to foster open research under a non-commercial license. While we have conducted internal red-teaming to ensure the model remains polite and harmless, we welcome the community's feedback and help in further red-teaming.

Data Generation and Collection
The training for the FreeWilly models was directly inspired by the methodology pioneered by Microsoft in its paper, "Orca: Progressive Learning from Complex Explanation Traces of GPT-4." While our data generation process is similar, we differ in our data sources.

Our variant of the dataset, containing 600,000 data points (roughly 10% of the dataset size the original Orca paper used), was created by prompting language models with high-quality instructions from the following datasets created by Enrico Shippole:
  1. COT Submix Original
  2. NIV2 Submix Original
  3. FLAN 2021 Submix Original
  4. T0 Submix Original
With this approach, we generated 500,000 examples with one simpler LLM and an additional 100,000 with a more sophisticated LLM. To ensure fair comparisons, we carefully filtered these datasets and removed examples that originated from evaluation benchmarks. Despite training on one-tenth the sample size of the original Orca paper (significantly reducing the cost and carbon footprint of training the model compared to the original paper), the resulting FreeWilly models demonstrate exceptional performance across various benchmarks, validating our approach to synthetically generated datasets.

Performance Evaluation
To internally evaluate these models, we used EleutherAI’s lm-eval-harness, to which we added AGIEval.

Both FreeWilly models excel in many areas, including intricate reasoning, understanding linguistic subtleties, and answering complex questions related to specialized domains, e.g. Law and mathematical problem-solving.
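
For a rough idea of what such an lm-evaluation-harness run looks like in practice, here's a sketch (not Stability's exact setup): the repo id, the "hf" adapter name, and the arguments are assumptions, and older harness releases call the adapter "hf-causal".

Code:
# pip install lm-eval  (EleutherAI lm-evaluation-harness)
# Sketch only: repo id, adapter name, and task list are assumptions;
# Stability's actual evaluation setup and harness version may differ.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf",  # older releases: "hf-causal"
    model_args="pretrained=stabilityai/FreeWilly2,dtype=float16",
    tasks=["arc_challenge", "hellaswag"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])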

Open LLM Leaderboard benchmarks:
These FreeWilly results were evaluated by Stability AI researchers and independently reproduced by Hugging Face on July 21st, 2023, and published in their leaderboard.
[results table screenshot]



GPT4ALL benchmarks (all 0-shot):
[results table screenshot]



AGI Eval (all 0-shot):
[results table screenshot]



Contributing to an open future
FreeWilly1 and FreeWilly2 set a new standard in the field of open access Large Language Models. They both significantly advance research, enhance natural language understanding, and enable complex tasks. We are excited about the endless possibilities that these models will bring to the AI community, and the new applications they will inspire.

We would like to express our sincere gratitude to our passionate team of researchers, engineers, and collaborators, whose remarkable efforts and dedication have enabled us to reach this significant milestone.
Stay tuned for more exciting developments, and begin exploring the incredible potential of FreeWilly today!

1. The weights for FreeWilly2 are released as-is, while FreeWilly1’s are released as deltas over the original model. Both models are released under CC-BY-NC 4.0
2. These include the ARC-Challenge and others on the Open LLM Leaderboard, and GPT4ALL’s Performance Benchmarks
3. As reported in the "GPT-4 Technical Report" from OpenAI (March 27th, 2023)
4. As reported in the paper “Orca: Progressive Learning from Complex Explanation Traces of GPT-4” from Microsoft Research (June 5th 2023)
 