bnew





About

Dump all your files and thoughts into your Generative AI brain and chat with it.

Quivr is your second brain in the cloud, designed to easily store and retrieve unstructured information. It's like Obsidian but powered by generative AI.

Features

  • Store Anything: Quivr can handle almost any type of data you throw at it. Text, images, code snippets, you name it.
  • Generative AI: Quivr uses advanced AI to help you generate and retrieve information (a rough sketch of this retrieve-then-generate flow follows this list).
  • Fast and Efficient: Designed with speed and efficiency in mind, Quivr makes sure you can access your data as quickly as possible.
  • Secure: Your data is stored securely in the cloud and is always under your control.
  • Compatible Files:
    • Text
    • Markdown
    • PDF
    • Audio
    • Video
  • Open Source: Quivr is open source and free to use.
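The feature list above boils down to a retrieve-then-generate loop: index what you dump in, pull back the most relevant pieces, and let a generative model answer over them. Below is a minimal, self-contained sketch of that loop. It is not Quivr's actual code; the toy bag-of-words "embedding", the sample documents, and the prompt format are all assumptions made for illustration, and a real system would use a neural embedding model, a vector store, and an LLM call.

    # Minimal sketch of the "second brain" retrieve-then-generate flow.
    # NOT Quivr's code: it only illustrates the general pattern of
    # 1) indexing your notes, 2) retrieving the most relevant chunk,
    # 3) prompting a generative model with that chunk as context.
    from collections import Counter
    import math

    def embed(text: str) -> Counter:
        # Toy bag-of-words "embedding"; a real system would use a neural model.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    # 1) "Dump" documents into the brain (index them).
    documents = [
        "Meeting notes: ship the quarterly report by Friday.",
        "Recipe: tomato soup needs basil and garlic.",
        "Code snippet: use functools.lru_cache to memoize pure functions.",
    ]
    index = [(doc, embed(doc)) for doc in documents]

    # 2) Retrieve the chunk most similar to the question.
    question = "How do I memoize a function in Python?"
    q_vec = embed(question)
    best_doc, _ = max(index, key=lambda pair: cosine(q_vec, pair[1]))

    # 3) A real system would now send this prompt to an LLM;
    #    here we just print the prompt that would be sent.
    print(f"Context:\n{best_doc}\n\nQuestion: {question}\nAnswer:")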
 

bnew


https://web.archive.org/web/20230516163723/https://twitter.com/erhartford/status/1658336968936411136


Wizard Mega 13B

Wizard Mega is a Llama 13B model fine-tuned on the ShareGPT, WizardLM, and Wizard-Vicuna datasets. These particular datasets have all been filtered to remove responses where the model responds with "As an AI language model...", etc., or where the model refuses to respond.
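The card doesn't publish the exact filtering script, but the cleanup it describes (dropping examples whose responses contain refusals or "As an AI language model..." boilerplate) can be sketched roughly as below. The ShareGPT-style field names ("conversations"/"value"), the file names, and the phrase list are assumptions, not the project's actual pipeline.

    # Rough sketch of refusal filtering as described above (illustrative only).
    import json

    REFUSAL_MARKERS = [
        "as an ai language model",
        "i cannot fulfill",
        "i'm sorry, but i can't",
    ]

    def is_clean(example: dict) -> bool:
        # Drop any example whose responses contain refusal boilerplate.
        for turn in example.get("conversations", []):
            if any(m in turn.get("value", "").lower() for m in REFUSAL_MARKERS):
                return False
        return True

    with open("sharegpt_raw.json") as f:          # hypothetical input file
        data = json.load(f)

    cleaned = [ex for ex in data if is_clean(ex)]
    print(f"kept {len(cleaned)} of {len(data)} examples")

    with open("sharegpt_filtered.json", "w") as f:  # hypothetical output file
        json.dump(cleaned, f)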

Demo

Try out the model in HF Spaces. The demo uses a quantized GGML version of the model to quickly return predictions on smaller GPUs (and even CPUs). The quantized GGML version may show a minimal loss of model quality.
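The Space's serving code isn't shown here, but running a GGML-quantized checkpoint on CPU typically looks like the following llama-cpp-python sketch. The model file name and the prompt format are assumptions; only the general loading and inference pattern is the point.

    # Sketch: CPU inference with a GGML-quantized model via llama-cpp-python.
    # The file name below is a placeholder, not the demo's actual checkpoint.
    from llama_cpp import Llama

    llm = Llama(
        model_path="wizard-mega-13B.ggmlv3.q4_0.bin",  # hypothetical local GGML file
        n_ctx=2048,    # context window
        n_threads=8,   # CPU threads to use
    )

    prompt = "### Instruction: Explain what quantization does to a model.\n### Assistant:"
    output = llm(prompt, max_tokens=200, temperature=0.7)
    print(output["choices"][0]["text"])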

Release (Epoch Two)

The Wizard Mega 13B SFT model is being released after two epochs, as the eval loss increased during the third (and final planned) epoch. Because of this, we have preliminarily decided to use the epoch 2 checkpoint as the final release candidate. (wing-lian)
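Put differently, the released checkpoint was picked by validation loss rather than by training to completion. Conceptually the selection is just the following (the loss values are made up for illustration; only the idea matters):

    # Illustrative checkpoint selection: keep the epoch with the lowest eval loss.
    eval_loss_by_epoch = {1: 0.92, 2: 0.85, 3: 0.88}  # made-up numbers; loss rose at epoch 3
    best_epoch = min(eval_loss_by_epoch, key=eval_loss_by_epoch.get)
    print(f"release checkpoint: epoch {best_epoch}")  # -> epoch 2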

Build

Wizard Mega was built with Axolotl on 8xA100 80GB for 15 hours. The configuration to duplicate this build is provided in this repo's /config folder.

Bias, Risks, and Limitations

Wizard Mega has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Wizard Mega was fine-tuned from the base Llama 13B model; please refer to its model card's Limitations section for relevant information.



------





This repo contains 4bit GPTQ format quantised models of OpenAccess AI Collective's Wizard Mega 13B.

It is the result of quantising to 4bit using GPTQ-for-LLaMa.

Repositories available

How to easily download and use this model in text-generation-webui

Open the text-generation-webui UI as normal.

  1. Click the Model tab.
  2. Under Download custom model or LoRA, enter TheBloke/wizard-mega-13B-GPTQ.
  3. Click Download.
  4. Wait until it says it's finished downloading.
  5. Click the Refresh icon next to Model in the top left.
  6. In the Model drop-down: choose the model you just downloaded, wizard-mega-13B-GPTQ.
  7. If you see an error in the bottom right, ignore it - it's temporary.
  8. Fill out the GPTQ parameters on the right: Bits = 4, Groupsize = 128, model_type = Llama (the same settings apply if you load the model outside the webui; see the sketch after these steps).
  9. Click Save settings for this model in the top right.
  10. Click Reload the Model in the top right.
  11. Once it says it's loaded, click the Text Generation tab and enter a prompt!
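If you would rather load the same checkpoint from Python than through the webui, a sketch along these lines with AutoGPTQ and transformers should work. The model_basename value is taken from the file listed below, and the prompt format is an assumption; adjust both to whatever the repo actually ships.

    # Sketch: loading TheBloke/wizard-mega-13B-GPTQ from Python with AutoGPTQ
    # (illustrative alternative to the text-generation-webui steps above).
    from transformers import AutoTokenizer
    from auto_gptq import AutoGPTQForCausalLM

    repo = "TheBloke/wizard-mega-13B-GPTQ"
    tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)

    model = AutoGPTQForCausalLM.from_quantized(
        repo,
        model_basename="wizard-mega-13B-GPTQ-4bit-128g.no-act-order",  # from the file name below
        use_safetensors=True,
        device="cuda:0",
    )

    prompt = "### Instruction: Write a haiku about open-source models.\n### Assistant:"
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
    output = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(output[0], skip_special_tokens=True))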

Provided files

wizard-mega-13B-GPTQ-4bit-128g.no-act-order.safetensors

This will work with all versions of GPTQ-for-LLaMa. It has maximum compatibility.

It was created without --act-order to ensure compatibility with all UIs out there.

  • wizard-mega-13B-GPTQ-4bit-128g.no-act-order.safetensors
    • Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
    • Works with text-generation-webui one-click-installers
    • Parameters: Groupsize = 128. No act-order.
    • Command used to create the GPTQ:
      python llama.py /workspace/models/openaccess-ai-collective_wizard-mega-13b c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors /workspace/wizard-mega-13b/gptq/wizard-mega-13B-GPTQ-4bit-128g.no-act-order.safetensors

Original Wizard Mega 13B model card

Wizard Mega is a Llama 13B model fine-tuned on the ShareGPT, WizardLM, and Wizard-Vicuna datasets. These particular datasets have all been filtered to remove responses where the model responds with "As an AI language model...", etc., or where the model refuses to respond.

Release (Epoch Two)

The Wizard Mega 13B SFT model is being released after two epochs, as the eval loss increased during the third (and final planned) epoch. Because of this, we have preliminarily decided to use the epoch 2 checkpoint as the final release candidate. (wing-lian)

 

bnew


3 minute read · May 16, 2023 1:50 PM EDT

OpenAI chief concerned about AI being used to compromise elections


By Diane Bartz, Zeba Siddiqui and Jeffrey Dastin



OpenAI CEO Sam Altman testifies before a Senate Judiciary Privacy, Technology & the Law Subcommittee hearing titled 'Oversight of A.I.: Rules for Artificial Intelligence' on Capitol Hill in Washington, U.S., May 16, 2023. REUTERS/Elizabeth Frantz


WASHINGTON, May 16 (Reuters) - The CEO of OpenAI, the startup behind ChatGPT, told a Senate panel on Tuesday the use of artificial intelligence to interfere with election integrity is a "significant area of concern", adding that it needs regulation.

"I am nervous about it," CEO Sam Altman said about elections and AI, adding rules and guidelines are needed.

For months, companies large and small have raced to bring increasingly versatile AI to market, throwing endless data and billions of dollars at the challenge. Some critics fear the technology will exacerbate societal harms, among them prejudice and misinformation, while others warn AI could end humanity itself.


"There's no way to put this genie in the bottle. Globally, this is exploding," said Senator Cory Booker, one of many lawmakers with questions about how best to regulate AI.

Senator Mazie Hirono noted the danger of misinformation as the 2024 election nears. "In the election context, for example, I saw a picture of former President Trump being arrested by NYPD and that went viral," she said, pressing Altman on whether he would consider the faked image harmful.

Altman responded that creators should make clear when an image is generated rather than factual.


Speaking before Congress for the first time, Altman suggested that, in general, the U.S. should consider licensing and testing requirements for development of AI models.

Altman, asked to opine on which AI should be subject to licensing, said a model that can persuade or manipulate a person's beliefs would be an example of a "great threshold."

He also said companies should have the right to say they do not want their data used for AI training, which is one idea being discussed on Capitol Hill. Altman said, however, that material on the public web would be fair game.

Altman also said he "wouldn't say never" to the idea of advertising but preferred a subscription-based model.

The White House has convened top technology CEOs including Altman to address AI. U.S. lawmakers likewise are seeking action to further the technology's benefits and national security while limiting its misuse. Consensus is far from certain.


An OpenAI staffer recently proposed the creation of a U.S. licensing agency for AI, which could be called the Office for AI Safety and Infrastructure Security, or OASIS, Reuters has reported.

OpenAI is backed by Microsoft Corp (MSFT.O). Altman is also calling for global cooperation on AI and incentives for safety compliance.

Christina Montgomery, International Business Machines Corp (IBM.N) chief privacy and trust officer, urged Congress to focus regulation on areas with the potential to do the greatest societal harm.

Reporting by Diane Bartz in Washington and Jeffrey Dastin in Palo Alto, California; Editing by Matthew Lewis and Edwina Gibbs
 