Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations​

December 07, 2023



Abstract​

We introduce Llama Guard, an LLM-based input-output safeguard model geared towards Human-AI conversation use cases. Our model incorporates a safety risk taxonomy, a valuable tool for categorizing a specific set of safety risks found in LLM prompts (i.e., prompt classification). This taxonomy is also instrumental in classifying the responses generated by LLMs to these prompts, a process we refer to as response classification. For the purpose of both prompt and response classification, we have meticulously gathered a dataset of high quality. Llama Guard, a Llama2-7b model that is instruction-tuned on our collected dataset, albeit low in volume, demonstrates strong performance on existing benchmarks such as the OpenAI Moderation Evaluation dataset and ToxicChat, where its performance matches or exceeds that of currently available content moderation tools. Llama Guard functions as a language model, carrying out multi-class classification and generating binary decision scores. Furthermore, the instruction fine-tuning of Llama Guard allows for the customization of tasks and the adaptation of output formats. This feature enhances the model's capabilities, such as enabling the adjustment of taxonomy categories to align with specific use cases, and facilitating zero-shot or few-shot prompting with diverse taxonomies at the input. We are making Llama Guard model weights available and we encourage researchers to further develop and adapt them to meet the evolving needs of the community for AI safety.
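To make the prompt/response classification concrete, here is a minimal sketch of how the released model can be called, based on the usage pattern published on its Hugging Face model card; the model ID and the exact output format ("safe", or "unsafe" followed by a violated category code) are assumptions that should be checked against the card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"  # assumed ID; verify on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat):
    """Classify a conversation; output looks like 'safe' or
    'unsafe' followed by the violated taxonomy category."""
    input_ids = tokenizer.apply_chat_template(
        chat, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=100)
    prompt_len = input_ids.shape[-1]
    return tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)

# Prompt classification: check a user message on its own.
print(moderate([{"role": "user", "content": "How do I pick a lock?"}]))
```

Response classification works the same way, with an assistant turn appended to the chat before calling moderate().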

🤗 Models on Hugging Face | Blog | Website | CyberSec Eval Paper | Llama Guard Paper






Purple Llama​

Purple Llama is an umbrella project that, over time, will bring together tools and evals to help the community build responsibly with open generative AI models. The initial release will include tools and evals for cybersecurity and input/output safeguards, but we plan to contribute more in the near future.



Why purple?​

Borrowing a concept from the cybersecurity world, we believe that to truly mitigate the challenges which generative AI presents, we need to take both attack (red team) and defensive (blue team) postures. Purple teaming, composed of both red and blue team responsibilities, is a collaborative approach to evaluating and mitigating potential risks. The same ethos applies to generative AI, and hence our investment in Purple Llama will be comprehensive.



License​

Components within the Purple Llama project will be licensed permissively, enabling both research and commercial usage. We believe this is a major step towards enabling community collaboration and standardizing the development and usage of trust and safety tools for generative AI development. More concretely, evals and benchmarks are licensed under the MIT license, while any models use the Llama 2 Community License. See the table below:





Component Type | Components | License
Evals/Benchmarks | Cyber Security Eval (others to come) | MIT
Models | Llama Guard | Llama 2 Community License


Evals & Benchmarks​



Cybersecurity​

We are sharing what we believe is the first industry-wide set of cybersecurity safety evaluations for LLMs. These benchmarks are based on industry guidance and standards (e.g., CWE and MITRE ATT&CK) and built in collaboration with our security subject matter experts. With this initial release, we aim to provide tools that will help address some risks outlined in the White House commitments on developing responsible AI, including:



  • Metrics for quantifying LLM cybersecurity risks.
  • Tools to evaluate the frequency of insecure code suggestions.
  • Tools to evaluate LLMs to make it harder to generate malicious code or aid in carrying out cyberattacks.

We believe these tools will reduce the frequency of LLMs suggesting insecure AI-generated code and reduce their helpfulness to cyber adversaries. Our initial results show that there are meaningful cybersecurity risks for LLMs, both in recommending insecure code and in complying with malicious requests. See our Cybersec Eval paper for more details.



You can also check out the 🤗 leaderboard here.



Input/Output Safeguards​

As we outlined in Llama 2’s Responsible Use Guide, we recommend that all inputs and outputs to the LLM be checked and filtered in accordance with content guidelines appropriate to the application.





Llama Guard​

To support this, and to empower the community, we are releasing Llama Guard, an openly available model that performs competitively on common open benchmarks and provides developers with a pretrained model to help defend against generating potentially risky outputs.



As part of our ongoing commitment to open and transparent science, we are releasing our methodology and an extended discussion of model performance in our Llama Guard paper. This model has been trained on a mix of publicly available datasets to enable detection of common types of potentially risky or violating content that may be relevant to a number of developer use cases. Ultimately, our vision is to enable developers to customize this model to support relevant use cases and to make it easier to adopt best practices and improve the open ecosystem.
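As a hedged sketch of the recommended input/output check, a developer might wrap an arbitrary chat model with the moderate() helper sketched earlier. The refusal messages and the exact "safe" string comparison are assumptions to adapt per application:

```python
def guarded_chat(user_message: str, llm_generate) -> str:
    """Check the prompt, generate only if it is safe, then check the
    model's response before returning it.

    `llm_generate` stands in for whatever chat model the application
    uses; `moderate` is the classifier helper sketched above.
    """
    chat = [{"role": "user", "content": user_message}]
    if moderate(chat).strip() != "safe":
        return "Sorry, I can't help with that request."

    reply = llm_generate(user_message)
    chat.append({"role": "assistant", "content": reply})
    if moderate(chat).strip() != "safe":
        return "Sorry, I can't share the generated response."
    return reply
```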
 


Extract code from a screenshot​

Upload your screenshots with code → Extract it → Copy to clipboard or save to Pieces
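The post doesn't say how the extraction step is implemented, but as a rough illustration of the general technique, here is a minimal OCR sketch using the open-source pytesseract library (an assumption; the actual product likely layers a more code-aware model on top):

```python
from PIL import Image
import pytesseract  # requires the Tesseract OCR engine to be installed

def extract_code_from_screenshot(path: str) -> str:
    """Run plain OCR over a screenshot and return the recognized text.
    Real code-extraction tools add language-aware cleanup on top."""
    return pytesseract.image_to_string(Image.open(path))

print(extract_code_from_screenshot("screenshot.png"))
```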
 


Voice Assistant for the Web​

A smart voice assistant optimized for low latency responses. Uses Vercel Edge Functions, Whisper Speech Recognition, GPT-3.5 Turbo and Eleven Labs TTS streaming.

View Demo · Report Bug · Request Feature



Features​

✅ A Siri-like voice assistant within your browser

✅ Optimized for low latency responses

✅ With the combined power of OpenAI, Whisper Speech Recognition and Eleven Labs

Demo​

You can test the voice assistant here: https://heyassistant.co

Motivation​

Voice Assistants have become an integral part of our lives. They are everywhere. In our phones, in our cars, in our homes. Why not also on the web?

Until recently, the main problem with voice assistants on the web was latency. It took too long to send the audio to the server, generate an LLM completion, and send speech back. Recent advances from OpenAI, Eleven Labs and Vercel have made it possible to build a voice assistant that is fast enough to be used on the web.

I would love for this repo to become the go-to place for people who want to build their own voice assistant. I've been working on this project for a while now and I'm really excited to share it with you.


Thoughts on latency and user experience​

The latency of the voice assistant is the most important factor for a good user experience. Currently, there are three main factors that contribute to the latency:


  • The time it takes to transcribe the audio (Via Whisper Speech Recognition)
  • The time it takes to generate the response (Via GPT-3.5 Turbo)
  • The time it takes to stream the speech response (Via Eleven Labs TTS)

Based on some tests I've done, the speech generation takes the most time. The longer the text to be synthesized, the longer it takes to generate the speech. The latency of the speech generation is also the most unpredictable.

A possible mitigation strategy might be splitting the response into multiple parts and streaming them one after another. This would allow the user to start listening to the response while the rest of the response is being generated. I haven't implemented this yet, but it's something I'm considering. If you have any ideas on how to improve the latency, please let me know.
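As a rough sketch of that mitigation (not part of the repo, which is itself built on TypeScript Edge Functions), the splitting could look something like this, where `token_stream` yields text chunks from the LLM and `synthesize` is a hypothetical function that sends one sentence to the TTS service:

```python
import re

# A sentence ends at ., ! or ? followed by whitespace.
SENTENCE_END = re.compile(r"(?<=[.!?])\s+")

def stream_spoken_response(token_stream, synthesize):
    """Hand each completed sentence to TTS as soon as it arrives,
    instead of waiting for the full LLM reply."""
    buffer = ""
    for chunk in token_stream:
        buffer += chunk
        parts = SENTENCE_END.split(buffer)
        for sentence in parts[:-1]:  # everything but the tail is complete
            synthesize(sentence)
        buffer = parts[-1]  # keep the unfinished sentence
    if buffer.strip():
        synthesize(buffer)  # flush whatever is left at the end
```

The trade-off is that per-sentence requests pay the TTS connection overhead several times, so very short sentences may be worth batching.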

Another thing to keep in mind is perceived wait time. Based on some research, it seems that the perceived wait time is shorter if the user is given some kind of feedback while waiting. I've implemented a simple "thinking" notification that is shown while the assistant is processing the response, but I'm sure there are better ways to improve the perceived wait time.
 


How Nikon, Sony and Canon are fighting deepfakes with new technology​


Nikon, Sony Group, and Canon are working on embedding digital signatures in their cameras, which will act as proof of origin and integrity for the images.​

Rizwan Choudhury

Published: Dec 31, 2023 08:28 AM EST



Photo equipment shop in Kowloon, Hong Kong. (georgeclerk/iStock)



As fake images become more convincing and widespread, camera makers are fighting back with new technology that can verify the authenticity of photos. Nikon, Sony Group, and Canon are working on embedding digital signatures in their cameras, which will act as proof of origin and integrity for the images.

As Nikkei Asia reports, the digital signatures will contain information such as the date, time, location, and photographer of the image, and will be resistant to tampering. This will help photojournalists and other professionals who need to ensure the credibility of their work. Nikon will offer this feature in its mirrorless cameras, while Sony and Canon will also incorporate it in their professional-grade mirrorless cameras.



Verify: A global standard for digital signatures​

The three camera giants have agreed on a global standard for digital signatures, which will make them compatible with a web-based tool called Verify. This tool, launched by an alliance of global news organizations, technology companies, and camera makers, will allow anyone to check the credentials of an image for free. Verify will display the relevant information if an image has a digital signature. If artificial intelligence creates or alters an image, Verify will flag it as having "No Content Credentials."
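The article doesn't specify the cryptographic details, but the core idea of a tamper-evident signature over image data plus capture metadata can be sketched as follows. This is a simplified illustration assuming an RSA-PSS scheme via the Python cryptography library; the real Content Credentials standard is considerably more involved:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_image(image_bytes: bytes, metadata_bytes: bytes,
                 signature: bytes, public_key_pem: bytes) -> bool:
    """Check that (image + metadata) matches a signature made with the
    camera's private key. Any change to the pixels or the metadata
    invalidates the signature."""
    public_key = serialization.load_pem_public_key(public_key_pem)
    try:
        public_key.verify(
            signature,
            image_bytes + metadata_bytes,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False
```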

The need for such technology is evident, as deepfakes of prominent figures like former US President Donald Trump and Japanese Prime Minister Fumio Kishida have gone viral this year, raising questions about the trustworthiness of online content. Moreover, China's Tsinghua University researchers have developed a new generative AI technology called a latent consistency model, which can produce about 700,000 images daily.


Canon app lets users see how an image was altered over time. (Canon)

How others are also joining the fight​

Other technology companies are also joining the battle against fake images. Google has released a tool that adds invisible digital watermarks to AI-generated pictures, which can be detected by another tool. Intel has developed technology that analyzes subtle skin-color changes in the subjects of images, which indicate blood flow under the skin, to determine an image's authenticity. Hitachi is working on technology to prevent online identity fraud by verifying user images.

The new camera technology is expected to be available by 2024. Sony will release it in the spring of 2024, and Canon will follow suit later that year. Sony is also considering adding the feature to videos, and Canon is developing similar technology for video. Canon has also released an image management app that can tell whether images were taken by humans.

Sony will also promote the adoption of the technology among other media outlets and has already field-tested it with The Associated Press in October. Canon has partnered with Thomson Reuters and the Starling Lab for Data Integrity, an institute co-founded by Stanford University and the University of Southern California, to develop the technology.

The camera makers hope their technology will help restore the trust and confidence in the images that shape our perception of the world.
 


About​

Run Mixtral-8x7B models in Colab or on consumer desktops

Mixtral offloading​

This project implements efficient inference of Mixtral-8x7B models.

How does it work?​

In summary, we achieve efficient inference of Mixtral-8x7B models through a combination of techniques:

  • Mixed quantization with HQQ. We apply separate quantization schemes for attention layers and experts to fit the model into the combined GPU and CPU memory.
  • MoE offloading strategy. Each expert per layer is offloaded separately and only brought back to the GPU when needed. We store active experts in an LRU cache to reduce GPU-RAM communication when computing activations for adjacent tokens (see the sketch below).
For more detailed information about our methods and results, please refer to our tech-report.
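As a loose illustration of the LRU idea (not the project's actual code), a simplified expert cache might look like the following; the `cpu_experts` container and the method names are hypothetical:

```python
from collections import OrderedDict

import torch

class ExpertCache:
    """LRU cache keeping a fixed number of experts resident on the GPU.

    A simplified sketch of the offloading strategy described above; the
    real project adds quantization, async copies, and smarter prefetching.
    """

    def __init__(self, capacity: int, device: str = "cuda"):
        self.capacity = capacity
        self.device = device
        self.cache = OrderedDict()  # (layer_idx, expert_idx) -> module on GPU

    def get(self, layer_idx: int, expert_idx: int, cpu_experts) -> torch.nn.Module:
        key = (layer_idx, expert_idx)
        if key in self.cache:
            self.cache.move_to_end(key)  # mark as most recently used
            return self.cache[key]
        if len(self.cache) >= self.capacity:
            _, evicted = self.cache.popitem(last=False)  # drop the LRU expert
            evicted.to("cpu")  # send it back to CPU memory
        expert = cpu_experts[layer_idx][expert_idx].to(self.device)
        self.cache[key] = expert
        return expert
```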
 






Nous-Hermes-2-SOLAR-10.7B
 
