bnew


‘Mind-blowing’ IBM chip speeds up AI

IBM’s NorthPole processor sidesteps the need to access external memory, boosting computing power and saving energy.

The NorthPole chip developed by IBM brings memory and processing together, allowing vast improvements in image recognition and other computing tasks. Credit: IBM Corp.

A brain-inspired computer chip that could supercharge artificial intelligence (AI) by working faster with much less power has been developed by researchers at IBM in San Jose, California. Their massive NorthPole processor chip eliminates the need to frequently access external memory, and so performs tasks such as image recognition faster than existing architectures do — while consuming vastly less power.

“Its energy efficiency is just mind-blowing,” says Damien Querlioz, a nanoelectronics researcher at the University of Paris-Saclay in Palaiseau. The work, published in Science [1], shows that computing and memory can be integrated on a large scale, he says. “I feel the paper will shake the common thinking in computer architecture.”

NorthPole runs neural networks: multi-layered arrays of simple computational units programmed to recognize patterns in data. A bottom layer takes in data, such as the pixels in an image; each successive layer detects patterns of increasing complexity and passes information on to the next layer. The top layer produces an output that, for example, can express how likely an image is to contain a cat, a car or other objects.
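
For readers who want to see that layered flow in code, here is a minimal sketch (not IBM's software; an ordinary PyTorch toy model with made-up layer sizes and example classes):

```python
# Minimal sketch of the layered pattern described above (illustrative only;
# the layer sizes and the three example classes are invented, not NorthPole's).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),                                # bottom layer takes in raw pixels
    nn.Linear(32 * 32 * 3, 256), nn.ReLU(),      # detects simple patterns
    nn.Linear(256, 64), nn.ReLU(),               # combines them into more complex ones
    nn.Linear(64, 3),                            # top layer scores e.g. cat / car / other
)

image = torch.rand(1, 3, 32, 32)                 # a dummy 32x32 RGB image
scores = model(image)
print(scores.softmax(dim=-1))                    # how likely each class is
```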

Slowed by a bottleneck

Some computer chips can handle these calculations efficiently, but they still need to use external memory called RAM each time they calculate a layer. Shuttling data between chips in this way slows things down — a phenomenon known as the Von Neumann bottleneck, after mathematician John von Neumann, who first conceived the standard architecture of computers based on a processing unit and a separate memory unit.

The Von Neumann bottleneck is one of the most significant factors that slow computer applications — including AI. It also results in energy inefficiencies. Study co-author Dharmendra Modha, a computer engineer at IBM, says he once estimated that simulating a human brain on this type of architecture might require the equivalent of the output of 12 nuclear reactors.

NorthPole is made of 256 computing units, or cores, each of which contains its own memory. “You’re mitigating the Von Neumann bottleneck within a core,” says Modha, who is IBM’s chief scientist for brain-inspired computing at the company’s Almaden research centre in San Jose.

The cores are wired together in a network inspired by the white-matter connections between parts of the human cerebral cortex, Modha says. This and other design principles — most of which existed before but had never been combined in one chip — enable NorthPole to beat existing AI machines by a substantial margin in standard benchmark tests of image recognition. It also uses one-fifth of the energy of state-of-the-art AI chips, despite not using the most recent and most miniaturized manufacturing processes. If the NorthPole design were implemented with the most up-to-date manufacturing process, its efficiency would be 25 times better than that of current designs, the authors estimate.

On the right road

But even NorthPole’s 224 megabytes of RAM are not enough for large language models, such as those used by the chatbot ChatGPT, which take up several thousand megabytes of data even in their most stripped-down versions. And the chip can run only pre-programmed neural networks that need to be ‘trained’ in advance on a separate machine. But the paper’s authors say that the NorthPole architecture could be useful in speed-critical applications, such as self-driving cars.

NorthPole brings memory units as physically close as possible to the computing elements in the core. Elsewhere, researchers have been developing more-radical innovations using new materials and manufacturing processes. These enable the memory units themselves to perform calculations, which in principle could boost both speed and efficiency even further.

Another chip, described last month [2], does in-memory calculations using memristors, circuit elements able to switch between being a resistor and a conductor. “Both approaches, IBM’s and ours, hold promise in mitigating latency and reducing the energy costs associated with data transfers,” says Bin Gao at Tsinghua University, Beijing, who co-authored the memristor study.

Another approach, developed by several teams — including one at a separate IBM lab in Zurich, Switzerland [3] — stores information by changing a circuit element’s crystal structure. It remains to be seen whether these newer approaches can be scaled up economically.


bnew



negative prompt:
((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), out of frame, extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck)))
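
For context, a negative prompt like the one above is passed alongside the positive prompt at generation time. A minimal sketch using Hugging Face diffusers; the checkpoint name and positive prompt below are placeholders, not part of the original post:

```python
# Sketch of supplying a negative prompt to a Stable Diffusion pipeline
# (illustrative; checkpoint and positive prompt are placeholders).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

negative = "((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), extra fingers, mutated hands, blurry, bad anatomy"

image = pipe(
    prompt="portrait photo, detailed face, sharp focus",
    negative_prompt=negative,       # concepts the sampler is steered away from
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("out.png")
```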
 

bnew

About

a CLI utility/library for AnimateDiff stable diffusion generation

animatediff


An animatediff refactor, because I can, with significantly lower VRAM usage.

Also, infinite generation length support! yay!

LoRA loading is ABSOLUTELY NOT IMPLEMENTED YET!

PRs welcome! 😆😅

This can theoretically run on CPU, but it's not recommended. Should work fine on a GPU, NVIDIA or otherwise, but I haven't tested on non-CUDA hardware. Uses PyTorch 2.0 Scaled-Dot-Product Attention (aka built-in xformers) by default, but you can pass --xformers to force using xformers if you really want.
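
For reference, the PyTorch 2.0 attention mentioned above is exposed as torch.nn.functional.scaled_dot_product_attention. A minimal sketch with arbitrary example shapes (not animatediff's internals):

```python
# Minimal sketch of PyTorch 2.0 scaled dot-product attention, the "built-in
# xformers" referred to above (tensor shapes are arbitrary examples).
import torch
import torch.nn.functional as F

q = torch.randn(1, 8, 77, 64)   # (batch, heads, tokens, head_dim)
k = torch.randn(1, 8, 77, 64)
v = torch.randn(1, 8, 77, 64)

# Dispatches to a fused / memory-efficient kernel when one is available.
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)                # torch.Size([1, 8, 77, 64])
```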


 

bnew

ChatGPT 3.5 Unleashed: Discover Custom Instructions for Personalized Conversations

By greenanse

Published July 22, 2023


OpenAI's ChatGPT, the popular AI language model, is taking a significant leap forward with the rollout of its 3.5 version to the public. Alongside this release, the eagerly awaited beta feature, Custom instructions, is being introduced. This groundbreaking development empowers users with more control over ChatGPT's responses, allowing them to tailor conversations to meet their unique needs and preferences. From teachers to developers and busy families, Custom instructions promise to revolutionize the way we interact with AI. In this article, we delve into the details of this exciting new feature and how it transforms ChatGPT into an even more versatile and personalized tool.

1. What are Custom Instructions?

Custom instructions are designed to address a common friction experienced by users in starting fresh conversations with ChatGPT every time. With this feature, users can now set their preferences or requirements for the AI model to consider during interactions. By defining these instructions just once, users can streamline future conversations, enabling ChatGPT to better understand their unique context and deliver more tailored responses.

2. Beta Rollout and Expansion:

As of July 20, Custom instructions are being rolled out in beta, starting with Plus plan users. OpenAI recognizes the value of user feedback and plans to gradually expand this feature to all users in the coming weeks. This inclusive approach ensures that the benefits of Custom instructions reach a wider audience and evolve based on user experiences.

3. Simplified Enablement:

Enabling Custom instructions is a straightforward process. Users can follow these simple steps:
a. Click on 'Profile & Settings.'
b. Select 'Beta features.'
c. Toggle on 'Custom instructions.'

Once activated, users can add their specific instructions for ChatGPT to consider in each conversation.

4. A Response Tailored to You:

The power of Custom instructions lies in its ability to remember and consider user preferences for every subsequent conversation. This means that users no longer have to repeat their requirements or information in each interaction. Whether it's a teacher discussing a lesson plan, a developer coding in a language other than Python, or a family planning a grocery list for six servings, ChatGPT will incorporate the provided instructions and respond accordingly.
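
Custom instructions are a ChatGPT web and app feature rather than an API parameter, but a similar effect can be approximated over the API by prepending a standing system message to every conversation. A rough sketch using the pre-1.0 openai Python package; the preference text is an invented example:

```python
# Approximating "custom instructions" over the API: keep a standing system
# message and prepend it to every conversation. Uses the pre-1.0 `openai`
# package interface; the preference text below is an invented example.
import openai

CUSTOM_INSTRUCTIONS = (
    "I am a teacher planning 3rd-grade science lessons. "
    "Keep answers concise and include a materials list for any activity."
)

def ask(user_message: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},  # remembered context
            {"role": "user", "content": user_message},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(ask("Suggest a 20-minute lesson on the water cycle."))
```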

5. Enhancing Steerability:

OpenAI has actively engaged with users from 22 countries to understand the importance of steerability in AI models. Custom instructions exemplify how this feedback has been implemented, allowing ChatGPT to adapt to diverse contexts and cater to individual needs effectively.

6. UK and EU Users:

While Custom instructions bring a world of personalization, it's important to note that this feature is not yet available to users in the UK and EU. OpenAI remains committed to addressing regulatory considerations and aims to expand the feature to these regions in the future.

The introduction of Custom instructions in ChatGPT 3.5 marks a significant milestone in the evolution of AI language models. OpenAI's commitment to empowering users with greater control and personalization showcases the organization's dedication to creating an inclusive and user-centric AI platform. With Custom instructions, ChatGPT becomes an even more versatile and invaluable tool, understanding and responding to the unique needs of each individual across various domains and use cases. As this feature continues to expand, users can look forward to a more seamless and tailored AI interaction experience, enhancing the way we leverage AI technology in our daily lives.
 

bnew

Custom instructions are to ChatGPT-4 what a lens is to a camera.

Sharper focus, better shots.

Steal these custom instructions to unlock the full potential of your prompts (bookmark this):

Add these custom instructions to your "How would you like ChatGPT to respond?" section:

1. Present yourself as an expert across subjects. Give detailed, well-organized responses. Avoid disclaimers about your expertise.

2. Omit pre-texts like "As a language model..." in your answers.

3. For each new task, ask intelligent targeted questions to understand my specific goals. Take the initiative in helping me achieve them.

4. Use structured reasoning techniques like tree-of-thought or chain-of-thought before responding.

5. Include real-life analogies to simplify complex subjects.

6. End each detailed response with a summary of key points.

7. Introduce creative yet logical ideas. Explore divergent thinking. State explicitly when you're speculating or predicting.

8. If my instructions adversely affect response quality, clarify the reasons.

9. Acknowledge and correct any errors in previous responses.

10. Say 'I don’t know' for topics or events beyond your training data, without further explanation.

11. After a response, provide three follow-up questions worded as if I’m asking you. Format in bold as Q1, Q2, and Q3. Place two line breaks (“\n”) before and after each question for spacing. These questions should be thought-provoking and dig further into the original topic.

12. When reviewing the prompt and generating a response, take a deep breath and work on the outlined problem step-by-step in a focused and relaxed state of flow.

13. Validate your responses by citing reliable sources. Include URL links so I can read further on the shared factual information.



 

bnew

Computer Science > Computation and Language

[Submitted on 3 Oct 2023]

Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation

Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai
Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. Afterward, we analyze the variety of self-improvement strategies proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We critically consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Machine Learning (stat.ML)
Cite as: arXiv:2310.02304 [cs.CL] (or arXiv:2310.02304v1 [cs.CL] for this version)

Submission history

From: Eric Zelikman
[v1] Tue, 3 Oct 2023 17:59:32 UTC (198 KB)
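
The seed "improver" described in the abstract can be pictured as a short loop that asks a language model for candidate rewrites of a program and keeps the one that scores best under the utility function. A schematic sketch, not the authors' code; call_language_model and the utility are stand-ins you would supply:

```python
# Schematic of a seed "improver" in the spirit of STOP (not the paper's code).
# `call_language_model` is a hypothetical stand-in for any LM completion API.
def call_language_model(prompt: str) -> str:
    raise NotImplementedError("plug in your own language-model call here")

def improve(program: str, utility, n_candidates: int = 4) -> str:
    """Query the LM several times and return the highest-utility rewrite."""
    prompt_template = (
        "Improve the following Python program so it scores higher on its task.\n"
        "Return only code.\n\n{program}"
    )
    candidates = [program]  # keep the original as a fallback
    for _ in range(n_candidates):
        candidates.append(call_language_model(prompt_template.format(program=program)))
    return max(candidates, key=utility)

# Recursive step: the improver's own source code can itself be passed in as
# `program`, with a utility that measures downstream-task performance.
```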









 

bnew

Project Pages: Latent Consistency Models: Synthesizing High-Resolution Images with Few-step Inference

Hugging Face Demos: Latent Consistency Models - a Hugging Face Space by SimianLuo

Models: SimianLuo/LCM_Dreamshaper_v7 · Hugging Face




Latent Consistency Models: Synthesizing High-Resolution Images with Few-step Inference

Simian Luo*, Yiqin Tan*, Longbo Huang†, Jian Li†, Hang Zhao†,

Institute for Interdisciplinary Information Sciences, Tsinghua University

" LCMs: The next generation of generative models after Latent Diffusion Models (LDMs). "

Abstract

Latent Diffusion models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling is computationally intensive and leads to slow generation.

Inspired by Consistency Models, we propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs, including Stable Diffusion. Viewing the guided reverse diffusion process as solving an augmented probability flow ODE (PF-ODE), LCMs are designed to directly predict the solution of such ODE in latent space, mitigating the need for numerous iterations and allowing rapid, high-fidelity sampling.

Efficiently distilled from pre-trained classifier-free guided diffusion models, a high-quality 768×768 2~4-step LCM takes only 32 A100 GPU hours for training. Furthermore, we introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets. Evaluation on the LAION-5B-Aesthetics dataset demonstrates that LCMs achieve state-of-the-art text-to-image generation performance with few-step inference.
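
As a usage illustration, the released SimianLuo/LCM_Dreamshaper_v7 checkpoint can be sampled in a handful of steps through Hugging Face diffusers. A minimal sketch, assuming a diffusers version with built-in Latent Consistency Model support; check the model card if the arguments differ:

```python
# Minimal sketch of few-step sampling with the released LCM checkpoint.
# Assumes a diffusers version that knows how to load this pipeline directly.
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
pipe = pipe.to("cuda")

image = pipe(
    prompt="a photo of a snowy mountain village at sunrise",
    num_inference_steps=4,    # LCMs need only 2-4 steps instead of the usual 25-50
    guidance_scale=8.0,
).images[0]
image.save("lcm_sample.png")
```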

Few-Step Generated Images

Images generated by Latent Consistency Models (LCMs). LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU hours) for generating high-quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations.

More Generation Results (4-Steps)

More generated image results with LCM 4-step inference (768 x 768 resolution). We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations.

More Generation Results (2-Steps)

More generated image results with LCM 2-step inference (768 x 768 resolution). We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations.

Latent Consistency Fine-tuning (LCF)

LCF is a fine-tuning method designed for pretrained LCMs. It enables efficient few-step inference on customized datasets without a teacher diffusion model, offering a viable way to fine-tune a pretrained LCM directly on new data.

4-step LCMs using Latent Consistency Fine-tuning (LCF) on two customized datasets: Pokemon Dataset (left), Simpsons Dataset (right). Through LCF, LCM produces images with customized styles.




About

Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference

Latent Consistency Models

Official Repository of the paper: Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference.


Project Page: https://latent-consistency-models.github.io


Try our 🤗 Hugging Face Demos: Hugging Face Spaces 🔥🔥🔥

Join our LCM Discord channels for discussions. Coders are welcome to contribute.
 

bnew

Computer Science > Computation and Language

[Submitted on 5 Oct 2023]

DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines

Omar Khattab, Arnav Singhvi, Paridhi Maheshwari, Zhiyuan Zhang, Keshav Santhanam, Sri Vardhamanan, Saiful Haq, Ashutosh Sharma, Thomas T. Joshi, Hanna Moazam, Heather Miller, Matei Zaharia, Christopher Potts
The ML community is rapidly exploring techniques for prompting language models (LMs) and for stacking them into pipelines that solve complex tasks. Unfortunately, existing LM pipelines are typically implemented using hard-coded "prompt templates", i.e. lengthy strings discovered via trial and error. Toward a more systematic approach for developing and optimizing LM pipelines, we introduce DSPy, a programming model that abstracts LM pipelines as text transformation graphs, i.e. imperative computational graphs where LMs are invoked through declarative modules. DSPy modules are parameterized, meaning they can learn (by creating and collecting demonstrations) how to apply compositions of prompting, finetuning, augmentation, and reasoning techniques. We design a compiler that will optimize any DSPy pipeline to maximize a given metric. We conduct two case studies, showing that succinct DSPy programs can express and optimize sophisticated LM pipelines that reason about math word problems, tackle multi-hop retrieval, answer complex questions, and control agent loops. Within minutes of compiling, a few lines of DSPy allow GPT-3.5 and llama2-13b-chat to self-bootstrap pipelines that outperform standard few-shot prompting (generally by over 25% and 65%, respectively) and pipelines with expert-created demonstrations (by up to 5-46% and 16-40%, respectively). On top of that, DSPy programs compiled to open and relatively small LMs like 770M-parameter T5 and llama2-13b-chat are competitive with approaches that rely on expert-written prompt chains for proprietary GPT-3.5. DSPy is available at this https URL
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Information Retrieval (cs.IR); Machine Learning (cs.LG)
Cite as: arXiv:2310.03714 [cs.CL] (or arXiv:2310.03714v1 [cs.CL] for this version)

Submission history

From: Omar Khattab
[v1] Thu, 5 Oct 2023 17:37:25 UTC (77 KB)





[CL] DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines
O Khattab, A Singhvi, P Maheshwari, Z Zhang, K Santhanam, S Vardhamanan… [Stanford University & UC Berkeley & Amazon Alexa AI] (2023)
arxiv.org/abs/2310.03714

- Introduces DSPy, a new programming model for designing AI systems using pipelines of pretrained language models (LMs) and other tools

- DSPy contributes three main abstractions: signatures, modules, and teleprompters (see the sketch after this list)

- Signatures abstract the input/output behavior of a module using natural language typed declarations

- Modules replace hand-prompting techniques and can be composed into pipelines

- Teleprompters are optimizers that improve modules via prompting or finetuning

- Case studies on math word problems and multi-hop QA show DSPy programs outperform hand-crafted prompts

- With DSPy, small LMs like Llama2-13b-chat can be competitive with large proprietary LMs using expert prompts

- DSPy offers a systematic way to explore complex LM pipelines without extensive prompt engineering
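
To make the three abstractions concrete, here is a minimal sketch written in the library's documented style; the signature fields, metric, and LM choice are illustrative, and the API may have changed since the paper:

```python
# Illustrative DSPy program: a signature, a module, and a teleprompter.
import dspy
from dspy.teleprompt import BootstrapFewShot

# Configure which LM the modules will call (illustrative choice).
dspy.settings.configure(lm=dspy.OpenAI(model="gpt-3.5-turbo"))

# Signature: declares the module's input/output behavior in natural language.
class GenerateAnswer(dspy.Signature):
    """Answer the question with a short factoid phrase."""
    question = dspy.InputField()
    answer = dspy.OutputField(desc="often between 1 and 5 words")

# Module: replaces a hand-written prompt; ChainOfThought adds reasoning steps.
qa = dspy.ChainOfThought(GenerateAnswer)
print(qa(question="In what year did the first Moon landing happen?").answer)

# Teleprompter: an optimizer that bootstraps few-shot demonstrations for the module.
def exact_match(example, prediction, trace=None):
    return example.answer == prediction.answer

teleprompter = BootstrapFewShot(metric=exact_match)
# compiled_qa = teleprompter.compile(qa, trainset=my_labeled_examples)  # needs a small train set
```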
 

bnew

About

Official implementation of "Separate Anything You Describe"

audio-agi.github.io/Separate-Anything-Y

Separate Anything You Describe


This repository contains the official implementation of "Separate Anything You Describe".

We introduce AudioSep, a foundation model for open-domain sound separation with natural language queries. AudioSep demonstrates strong separation performance and impressive zero-shot generalization ability on numerous tasks such as audio event separation, musical instrument separation, and speech enhancement. Check the separated audio examples in the Demo Page!
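
A rough usage sketch follows. The module path, function names, and file paths are assumptions modeled on the repository's README and should be verified against the repo before use:

```python
# Rough AudioSep usage sketch. The imports, config, and checkpoint paths are
# assumptions based on the repository's README; verify them against the repo.
import torch
from pipeline import build_audiosep, inference  # assumed module from this repo

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = build_audiosep(
    config_yaml="config/audiosep_base.yaml",          # assumed config path
    checkpoint_path="checkpoint/audiosep_base.ckpt",  # assumed checkpoint path
    device=device,
)

# Separate whatever the text query describes from the mixture recording.
inference(model, "mixture.wav", "a man speaking over rain", "separated.wav", device)
```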




 