bnew · Veteran · Joined Nov 1, 2015 · Messages: 55,696 · Reputation: 8,224 · Daps: 157,231

1/3
Making an AI Genie that checks what I'm doing; he's roasting me hard

2/3
It uses Nous-Hermes-2-Mixtral from @NousResearch (uncensored) and renders the video with AniPortrait (AniPortrait Official - a Hugging Face Space by ZJYang)

3/3
It just sends the current website I'm using; there's no vision at the moment.
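
For anyone curious how such a setup hangs together, here is a minimal sketch of the text-only loop, assuming the model is served behind a local OpenAI-compatible endpoint (e.g. via llama.cpp server or vLLM); the endpoint URL and prompts are my own illustrations, not the author's actual code:

```python
import requests

# Assumption: Nous-Hermes-2-Mixtral is served locally behind an
# OpenAI-compatible chat endpoint (e.g. llama.cpp server or vLLM).
LLM_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # hypothetical

def roast_current_site(url: str) -> str:
    """Send only the active website (no vision) and get a roast back."""
    payload = {
        "model": "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
        "messages": [
            {"role": "system",
             "content": "You are a sarcastic genie. Roast the user based on "
                        "the website they are currently browsing."},
            {"role": "user", "content": f"I'm currently on: {url}"},
        ],
    }
    resp = requests.post(LLM_ENDPOINT, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(roast_current_site("https://news.ycombinator.com"))
```

The reply text would then be fed to AniPortrait to animate the genie's face.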
 

bnew

1/1
Here is a snippet of the original news in Chinese :-)
[image: snippet of the original news in Chinese]
 

bnew


1/2
We just released a first version of StreamingT2V for long video generation with naive T2V/I2V support at: GitHub - Picsart-AI-Research/StreamingT2V: StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text

Thanks to the open-source community for sharing their models, and to @_akhaliq @huggingface for the generous demo GPU grant! Have fun: StreamingT2V - a Hugging Face Space by PAIR

2/2
Check out these two T2V examples from StreamingT2V using a year-old base model: motion blur hyperlapses and multi-angle split-screen videos! It's only the beginning, but the potential is captivating!
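
If you'd rather drive the demo programmatically than through the browser, the gradio_client library can connect to any public Space. A small sketch (the Space name is taken from the link above; the generation endpoint's signature isn't documented in the tweet, so the snippet just discovers it):

```python
from gradio_client import Client

# Connect to the public demo Space (name taken from the link above).
client = Client("PAIR/StreamingT2V")

# The tweet doesn't document the generation parameters, so list the
# Space's callable endpoints and their signatures before calling predict().
client.view_api()
```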
 

bnew


1/2
Introducing 𝐅𝐚𝐜𝐞-𝐭𝐨-𝐀𝐥𝐥: a powerful diffusers workflow that lets you customize your face with any style LoRA!

Inspired by the Face-to-Many ComfyUI workflow

Usage: input a face, choose a style LoRA, and get a stunning stylized portrait

2/2
Want to try Face-to-All for yourself? Check out the live Gradio demo on @huggingface Spaces!
Demo & video by @multimodalart: Face to All - a Hugging Face Space by multimodalart

Create incredible stylized portraits in just a few clicks, or build your own apps like this with us
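
This is not the Space's actual code, but a minimal diffusers sketch of the style-LoRA half of the workflow; the base model and LoRA repo below are example choices, and the face-identity conditioning that Face-to-All layers on top (InstantID-style, per the Face-to-Many lineage) is omitted:

```python
import torch
from diffusers import DiffusionPipeline

# Load an SDXL base pipeline; fp16 keeps it within a single consumer GPU.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Any SDXL style LoRA from the Hub works here; this repo id is an example.
pipe.load_lora_weights("nerijs/pixel-art-xl")

# Face-identity conditioning is omitted in this sketch, so the prompt
# alone drives the portrait.
image = pipe("portrait of a person, pixel art style",
             num_inference_steps=30).images[0]
image.save("stylized_portrait.png")
```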
 

bnew



1/2
𝐌𝐢𝐧𝐢𝐆𝐏𝐓𝟒-𝐕𝐢𝐝𝐞𝐨 Gradio demo is now available on @huggingface Spaces

A Multimodal LLM for Video Understanding

It incorporates textual conversations alongside visual content to effectively answer queries.


2/2
Demo by @fffiloni: MiniGPT4 Video - a Hugging Face Space by fffiloni

GitHub Project: GitHub - Vision-CAIR/MiniGPT4-video

Dive into the code and demo and start building your own applications with Gradio and MiniGPT4-Video today!
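
As a starting point, the demo can be called from Python with gradio_client; the Space id is inferred from the link above, and the argument order and api_name are assumptions to verify with client.view_api() first:

```python
from gradio_client import Client, handle_file

# Space id inferred from the link above; confirm on the Space page.
client = Client("fffiloni/MiniGPT4-Video")

# Argument order and api_name are assumptions; run client.view_api()
# to see the real signature before calling predict().
answer = client.predict(
    handle_file("my_clip.mp4"),      # a short local video
    "What happens in this video?",   # the text query
    api_name="/predict",
)
print(answer)
```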

 

bnew



1/2
Updates of 𝕎𝕚𝕝𝕕𝕍𝕚𝕤𝕚𝕠𝕟-𝔸𝕣𝕖𝕟𝕒: We added more models, such as @AnthropicAI's Claude 3 and @RekaAILabs' models! Also, many new features for improving user experience and collecting better evaluation data: e.g., we support selecting models for sampling and inputting reasons for votes. Wanna see GPT-4V vs Claude 3 Opus on vision-language tasks? Use our demo on @huggingface!

Link: Vision Arena (Testing VLMs side-by-side) - a Hugging Face Space by WildVision

Kudos to @yujielu_10 and @DongfuJiang for their hard work!

Previously on X:

2/2
Done! thanks
 

bnew


1/2
𝐆𝐆𝐔𝐅 𝐌𝐲 𝐑𝐞𝐩𝐨 is an app that allows you to create quantized GGUF models online in a fast & secure manner

App: GGUF My Repo - a Hugging Face Space by ggml-org

The tweet shows an example of creating a quantized 8-bit model of Gemma 2B in around 60 seconds. The model is uploaded to HF and can be used with llama.cpp.
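
Once the app has pushed the quantized repo to your account, the file can be pulled and run locally; a sketch using huggingface_hub and llama-cpp-python, with the repo and file names as placeholders for whatever GGUF My Repo produced:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder repo/file names: substitute whatever GGUF My Repo pushed
# to your account (e.g. an 8-bit quant of Gemma 2B).
gguf_path = hf_hub_download(
    repo_id="your-username/gemma-2b-Q8_0-GGUF",
    filename="gemma-2b.Q8_0.gguf",
)

# llama-cpp-python wraps llama.cpp, so the same GGUF runs on CPU.
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```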

2/2
Video credit: @ggerganov
App credit: @reach_vb, @pcuenq, and the team
 

bnew



1/4
MiniGPT4-Video @gradio demo has landed on @huggingface


2/4
Learn more about the model via this @gradio thread:

3/4
𝐌𝐢𝐧𝐢𝐆𝐏𝐓𝟒-𝐕𝐢𝐝𝐞𝐨: A Multimodal LLM for Video understanding

Building upon MiniGPT-v2, MiniGPT4-Video processes both temporal visual and textual data

Best part: It incorporates textual conversations alongside visual content to effectively answer queries.

4/4
Majestic example
 

bnew







1/7
Have you met this problem before: 14B is not capable enough, but 72B is too large?

A 30B model might be the sweet spot, and now we finally have it: the new member of the Qwen1.5 series, Qwen1.5-32B!

Blog: Qwen1.5-32B: Fitting the Capstone of the Qwen1.5 Language Model Series

HF: Qwen (Qwen), search for repos with “Qwen1.5-32B” in the model names.

GitHub: GitHub - QwenLM/Qwen1.5: Qwen1.5 is the improved version of Qwen, the large language model series developed by Qwen team, Alibaba Cloud.

Demo: Qwen1.5 32B Chat - a Hugging Face Space by Qwen

In evaluations of both the base and chat models, Qwen1.5-32B demonstrates competitive performance against the 72B model, falling only slightly behind.

Additionally, it has GQA this time!!!

The model's APIs are available on Alibaba Cloud DashScope and Together AI, and you can also play with it in Chat Arena! Furthermore, you can enjoy the GGUF in Ollama and LM Studio! It is supported by MLX too, so you can run it on your Mac as well!

We hope you enjoy the new model, and we'll see if it can help with your finetuning and applications!
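
For reference, a minimal transformers sketch of chatting with the 32B (the repo id follows the naming pattern above; in fp16 the weights alone need roughly 65 GB, so expect multi-GPU with device_map="auto" or a quantized variant):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-32B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Build the chat prompt with the model's own template.
messages = [{"role": "user", "content": "Give me a short introduction to GQA."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```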

2/7
To finetune the model, you can use Axolotl @winglian or Llama Factory @llamafactory_ai as usual. Wing has tested it out and everything works perfectly!

3/7
One of our users told me that he preferred the 32B even more than the 72B! Another user just replaced the 72B with the 32B in his RAG system.

4/7
Later we will provide more statistics on inference performance. I believe that this time, with GQA (8 KV heads), model serving performance will be satisfactory!

5/7
For deployment, vLLM and SGLang are recommended as always! For quantization, we have AutoGPTQ and AutoAWQ!
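
A sketch of offline batch inference with vLLM (tensor_parallel_size is an assumption that depends on your GPUs; the 32B in fp16 wants either one 80 GB card or several smaller ones):

```python
from vllm import LLM, SamplingParams

# Shard the 32B across two GPUs; adjust tensor_parallel_size to your setup.
llm = LLM(model="Qwen/Qwen1.5-32B-Chat", tensor_parallel_size=2)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Why is GQA good for serving throughput?"], params)
print(outputs[0].outputs[0].text)
```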

6/7
@skypilot_org SkyPilot already supports Qwen1.5. You can deploy the 32B the same way as the 72B!

7/7
Now it's on the Hacker News front page!
 

bnew

Japanese researchers say they used AI to try and translate the noises of clucking chickens and learn whether they're excited, hungry, or scared​

Beatrice Nolan

Sep 21, 2023, 7:41 AM EDT



  • Researchers think they've found a way to use AI to translate the clucks of chickens.
  • The Japanese researchers said their AI system could help understand chickens' emotional state.
  • The study has not been peer-reviewed and the researchers acknowledged limitations to their methods.

Researchers in Japan said they'd developed an AI system that could understand the emotional state of chickens.

The study, which was led by University of Tokyo professor Adrian David Cheok, has yet to be peer-reviewed.

The AI system is based on a technique the researchers called "Deep Emotional Analysis Learning," which can adapt to changing vocal patterns.

The study found that the system was capable of translating "various emotional states in chickens, including hunger, fear, anger, contentment, excitement, and distress."

The study said: "Our methodology employs a cutting-edge AI technique we call Deep Emotional Analysis Learning (DEAL), a highly mathematical and innovative approach that allows for the nuanced understanding of emotional states through auditory data."

"If we know what animals are feeling, we can design a much better world for them," Cheok told the New York Post. Cheok did not immediately respond to Insider's request for comment, made outside normal working hours.

The researchers tested the system on 80 chickens for the study and collaborated with a team of animal psychologists and veterinarians.

The system was able to achieve surprisingly high accuracy in identifying the birds' emotional states, the study found. "The high average probabilities of detection for each emotion suggest that our model has learned to capture meaningful patterns and features from the chicken sounds," it said.

The researchers acknowledged potential limitations, including variations in breeds and the complexity of some communications, such as body language.

Scientists and researchers are also using AI tools for conservation efforts. In one case, AI tools have been implemented to help identify tracks to better understand animal populations.

In 2022, researchers led by the University of Copenhagen, ETH Zurich, and France's National Research Institute for Agriculture, Food and Environment said they'd created an algorithm to help understand the emotions of pigs.
 

bnew

Computer Science > Computation and Language​

[Submitted on 26 Mar 2024]

COIG-CQIA: Quality is All You Need for Chinese Instruction Fine-tuning​

Yuelin Bai, Xinrun Du, Yiming Liang, Yonggang Jin, Ziqiang Liu, Junting Zhou, Tianyu Zheng, Xincheng Zhang, Nuo Ma, Zekun Wang, Ruibin Yuan, Haihong Wu, Hongquan Lin, Wenhao Huang, Jiajun Zhang, Wenhu Chen, Chenghua Lin, Jie Fu, Min Yang, Shiwen Ni, Ge Zhang
Recently, there have been significant advancements in large language models (LLMs), particularly focused on the English language. These advancements have enabled these LLMs to understand and execute complex instructions with unprecedented accuracy and fluency. However, despite these advancements, there remains a noticeable gap in the development of Chinese instruction tuning. The unique linguistic features and cultural depth of the Chinese language pose challenges for instruction tuning tasks. Existing datasets are either derived from English-centric LLMs or are ill-suited for aligning with the interaction patterns of real-world Chinese users. To bridge this gap, we introduce COIG-CQIA, a high-quality Chinese instruction tuning dataset. Our aim is to build a diverse, wide-ranging instruction-tuning dataset to better align model behavior with human interactions. To this end, we collect a high-quality human-written corpus from various sources on the Chinese Internet, including Q&A communities, Wikis, examinations, and existing NLP datasets. This corpus was rigorously filtered and carefully processed to form the COIG-CQIA dataset. Furthermore, we train models of various scales on different subsets of CQIA, following in-depth evaluation and analyses. The findings from our experiments offer valuable insights for selecting and developing Chinese instruction-tuning datasets. We also find that models trained on CQIA-Subset achieve competitive results in human assessment as well as knowledge and security benchmarks. Data are available at this https URL
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2403.18058 [cs.CL]
(or arXiv:2403.18058v1 [cs.CL] for this version)
[2403.18058] COIG-CQIA: Quality is All You Need for Chinese Instruction Fine-tuning

Submission history

From: Yuelin Bai [view email]
[v1] Tue, 26 Mar 2024 19:24:18 UTC (7,301 KB)
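
If the dataset is mirrored on the Hugging Face Hub, it can be loaded with the datasets library; the repo id and subset name below are placeholders, since the abstract's link is elided here (check the paper for the actual location):

```python
from datasets import load_dataset

# Hypothetical repo id and subset name: substitute the actual Hub
# location given in the paper (the abstract's URL is elided above).
ds = load_dataset("m-a-p/COIG-CQIA", "zhihu", split="train")
print(ds[0])
```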


 