
Segment Anything Model (SAM) native Qt UI




Napari plugin of Segment Anything Model (SAM)

Download the network weights here

This napari plugin was generated with Cookiecutter using @napari's cookiecutter-napari-plugin template.

FYI: napari is a fast, interactive, multi-dimensional image viewer for Python. It's designed for browsing, annotating, and analyzing large multi-dimensional images.
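
If you haven't used napari before, here is a minimal sketch of opening an image in the viewer using the standard napari API; the sample image comes from scikit-image, and the SAM plugin (which adds its own dock widgets on top of this) is not shown:

```
# Minimal napari sketch: open a sample image in the viewer.
# Standard napari API; the SAM plugin adds its own dock widgets on top.
import napari
from skimage import data  # scikit-image ships small sample images

viewer = napari.Viewer()
viewer.add_image(data.astronaut(), name="sample")  # add an RGB image layer
napari.run()  # start the Qt event loop (blocks until the window closes)
```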
 


A project that makes anything disappear via a text command (built on Meta's Segment Anything)

Disappear🪄🪄


A project that makes anything in your image or video disappear, combined with Meta's SAM.

🛠️☣️⚠️ Work in progress.
Imagine just typing "make the dogs in the image gone," and the dogs in your image are completely gone. This can now be done with SAM!

However, this is not easy to integrate, so we will implement it in three phases:

  • Step 1: remove humans in video;
  • Step 2: integrate SAM models;
  • Step 3: SAM + inpainting;

Highlights

  • Simple: all code runs via onnxruntime as much as possible, so the whole pipeline is easy to understand (a minimal onnxruntime sketch follows this list);
  • Prompt support: segmentation by text-prompt command will be supported;
  • Automatic: instead of using a brush to select a mask, we apply human matting or segmentation to produce the inpainting mask, which is useful when dealing with video;
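
Since the highlights lean on onnxruntime, here is a minimal inference sketch using the standard onnxruntime API; the model file name and input shape are placeholder assumptions, not the project's actual export:

```
# Minimal onnxruntime inference sketch.
# "model.onnx" and the input shape are placeholder assumptions,
# not the project's actual export.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Read the declared input name instead of hard-coding it.
input_name = session.get_inputs()[0].name
dummy = np.zeros((1, 3, 1024, 1024), dtype=np.float32)  # example image tensor

outputs = session.run(None, {input_name: dummy})  # None = return all outputs
print([o.shape for o in outputs])
```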
 

TheBloke/koala-7b-4bit-128g · Hugging Face


Koala: A Dialogue Model for Academic Research

This repo contains the weights of the Koala 7B model produced at Berkeley. It is the result of combining the diffs from young-geng/koala · Hugging Face with the original Llama 7B model.

This version has then been quantized to 4-bit using GPTQ-for-LLaMa.
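
If you want to load these 4-bit weights in Python, one option is the AutoGPTQ library rather than the GPTQ-for-LLaMa scripts themselves. The sketch below is hedged: the repo id is the one above, but the loading flags and the Koala prompt format are assumptions that may vary by version:

```
# Hedged sketch: load TheBloke/koala-7b-4bit-128g with AutoGPTQ.
# Loading flags (e.g. model_basename, use_safetensors) can vary by
# AutoGPTQ version and file layout; treat this as a starting point.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "TheBloke/koala-7b-4bit-128g"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoGPTQForCausalLM.from_quantized(repo, device="cuda:0")

# Koala-style prompt format (assumption based on the Koala release).
prompt = "BEGINNING OF CONVERSATION: USER: What is a dialogue model? GPT:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```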

Other Koala repos

I have also made these other Koala models available:

 

Supercharge Open-Source AI Models

Leverage locally-hosted Large Language Models to write software + unit tests for you.

Scripts are designed to run the Baize-30B model with 8-bit quantization on a cluster of multiple Linux servers, each with two 3090 or 4090 GPUs, using model parallelism.

There is a blog post accompanying this repo: Supercharger: Offline Automatic Codegen

Interesting features:

  • Prompt engineering specifically for code, tests, and evaluation.
  • Generates multiple code and unit-test candidates for a given function signature, and tries every combination of them until one code+test pair passes its own tests.
  • Uses an AI to score the code and tests to decide whether they are good enough.
  • Unit-tested, thorough code cleaning to remove unwanted artifacts from the model output.
  • Executes the candidate code and tests in a virtual machine to ensure it is safe (a simplified, subprocess-based sketch follows this list).
  • Uses a load balancer to distribute work across multiple worker nodes.
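
A caveat on the "virtual machine" bullet: the sketch below uses only a subprocess with a timeout, which is far weaker isolation than a real VM. It just illustrates the run-the-candidate-tests-and-check-the-exit-code loop; all names here are hypothetical, not the repo's API:

```
# Simplified sketch of running a candidate code+test pair in isolation.
# A subprocess with a timeout is NOT a real VM/sandbox; the project itself
# describes using a virtual machine for safety. Names are hypothetical.
import os
import subprocess
import sys
import tempfile

def run_candidate(code: str, test: str, timeout_s: float = 10.0) -> bool:
    """Write code+tests to a temp file, run it, and report pass/fail."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n\n" + test)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, timeout=timeout_s)
        return proc.returncode == 0  # tests passed iff exit code is 0
    except subprocess.TimeoutExpired:
        return False  # hung code counts as a failure
    finally:
        os.unlink(path)
```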


snippet of blog post:



Introduction

First I’ll say that this entire blog post was 100% written by a human without AI assistance. The code we’re going to talk about however, is another story.

The recent explosion of quality open-source Large Language Models you can run at home got me interested in the prospect of leveraging them to write code. I’ve been writing code myself for over 30 years, so it seemed like a problem for which I might have some intuition. It’s common knowledge now that leveraging GPT-4 to write code is very effective, but can it be done at home entirely offline?

With heavy use of GPT-4, I set to work on the supercharger project, aiming to find and implement new techniques that make automatic code generation succeed more often. I was successful both at automatically generating code and at finding some new ways to improve the rate of success.

Preliminaries: What are LLMs? What is prompt engineering?

Large Language Models are just auto-complete engines. To use them to write code, you need to provide a starting point. The starting point is a template, which presents a conversation between two people. The first person is the user, and the second person is the computer. The user asks the computer to do something, and the computer responds with a piece of code that does the job.

Writing these prompts is called prompt engineering. An effective strategy is to start with a narrator describing the situation and then provide examples of back-and-forth conversation. The prompt ends with the LLM poised to write the first line of code.

Here’s an example narration line:

The following is a Python conversation between Human and Coder. Human and Coder take turns chatting. Coder always considers responses carefully and thinks step by step before answering. Coder always writes syntactically correct Python code.

This is followed by several back-and-forth examples such as:
[|Human|]: Write a function that adds two numbers.
[|Coder|]: ```
def add_numbers(x, y):
    return x + y
```

It ends with:

[|Coder|]:

And then the LLM is prompted to auto-complete the line.
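
As a concrete illustration of this template structure, here is a small sketch that assembles such a prompt and trims the completion at the next speaker tag. The `generate(prompt)` backend is not shown; everything here is a hypothetical stand-in, not the supercharger code itself:

```
# Sketch: assemble a few-shot coding prompt and trim the completion.
# generate(prompt) would be your LLM backend; it is not defined here.
FENCE = "`" * 3  # avoids literal triple-backticks inside this example

NARRATION = (
    "The following is a Python conversation between Human and Coder. "
    "Human and Coder take turns chatting. Coder always considers "
    "responses carefully and thinks step by step before answering. "
    "Coder always writes syntactically correct Python code."
)

FEW_SHOT = (
    "[|Human|]: Write a function that adds two numbers.\n"
    f"[|Coder|]: {FENCE}\n"
    "def add_numbers(x, y):\n"
    "    return x + y\n"
    f"{FENCE}"
)

def build_prompt(task: str) -> str:
    # End with the Coder tag so the model auto-completes the code.
    return f"{NARRATION}\n\n{FEW_SHOT}\n[|Human|]: {task}\n[|Coder|]:"

def extract_code(completion: str) -> str:
    # Trim at the next speaker tag; the model tends to keep role-playing.
    return completion.split("[|Human|]")[0].strip()
```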

What kind of hardware do you need to run LLMs?

I have found there to be a very large difference between the quality of code written by 7B, 13B, and 30B models. The 30B models are the best, but they are also the slowest. To run them in real-time I’m using the Baize-30B model running with model parallelism across two RTX4090 GPUs in 8-bit mode. Since I’ve been doing a lot of ML lately, I already had the hardware. Here’s my GPU cluster, which is 4 machines each with 2x 4090 GPUs:

I’ve also built 3x3090 GPU machines, and found that the training throughput of the 2x4090 machines is 2x higher, so I’d recommend anyone building systems for this to seriously consider the 4090 GPUs.

That being said, you can also run this same stuff on a single CPU - it will just take longer. The Jetson Orin 64 GB platform is also interesting because it has a lot of unified VRAM, so it can also run 30B models in a small form factor but much slower.

Some of the ideas in this blog post work fine without a lot of compute power, so don’t worry if you don’t have a lot of hardware. The supercharger project is currently designed for 2x3090 or 2x4090 GPUs per node, but it could be trivially adapted to run smaller models with just one GPU if there is more interest.

Code Generating Tricks and Techniques

I’ve identified 6 techniques that can be used to improve the quality of code generated by a language model:
  1. Improve the prompt used to generate the code.
  2. Parse the output of the LLM and clean it up.
  3. Ask the LLM to improve code that it previously generated.
  4. Generate unit tests for the code and run them to verify that the generated code works.
  5. Ask the LLM to score the quality of code + unit test, and do not stop until the quality hits a threshold.
  6. Check all pairs of codes and unit tests to improve the odds of success (birthday attack it!).
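
Technique 6 is the interesting one: with N code candidates and M test candidates you get N×M chances of finding a passing pair, which is what makes the birthday-attack comparison apt. A hedged sketch (the `run_pair` executor is any safe runner, e.g. the hypothetical subprocess helper sketched earlier in this thread):

```
# Sketch of technique 6: try every (code, test) pair until one passes.
# run_pair(code, test) -> bool is any safe executor, e.g. the subprocess
# sketch shown earlier in this thread (hypothetical, not the repo's API).
from itertools import product
from typing import Callable, Optional

def find_passing_pair(codes: list[str], tests: list[str],
                      run_pair: Callable[[str, str], bool]
                      ) -> Optional[tuple[str, str]]:
    """Return the first (code, test) pair whose combined script passes."""
    for code, test in product(codes, tests):
        if run_pair(code, test):
            return code, test
    return None  # no pair passed; generate more candidates and retry
```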
 




Grounded Segment Anything: From Objects to Parts

In this repo, we expand the Segment Anything Model (SAM) to support text prompt input. The text prompt can be object-level🌕 (e.g., dog) or part-level🌗 (e.g., dog head). Furthermore, we build a Visual ChatGPT-based dialogue system 🤖💬 that flexibly calls various segmentation models when it receives instructions in natural language.
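
Conceptually, the pipeline is: a text-grounded detector turns the prompt into boxes, and SAM turns boxes into masks. Below is a hedged sketch; `detect_boxes` is a hypothetical stand-in for the grounded detector, the checkpoint path is a placeholder, and only the `segment_anything` calls follow Meta's published predictor API:

```
# Sketch: text prompt -> boxes (grounded detector) -> masks (SAM).
# detect_boxes() is a hypothetical stand-in for the text-grounded detector;
# the segment_anything calls follow Meta's published predictor API.
# "sam_vit_h.pth" is a placeholder checkpoint path.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)

def masks_for_prompt(image: np.ndarray, text: str):
    predictor.set_image(image)         # encode the image once
    boxes = detect_boxes(image, text)  # hypothetical grounded-detector call
    masks = []
    for box in boxes:                  # box = [x0, y0, x1, y1] in pixels
        m, _, _ = predictor.predict(box=np.asarray(box),
                                    multimask_output=False)
        masks.append(m[0])             # one binary mask per box
    return masks
```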
 

The difference in quality between the response Google Bard gave me vs. Koala-13B (fine-tuned by Berkeley AI Research on Meta's LLaMA model).
This question was inspired by a Reddit comment.

Google Bard:

explain a stocks cost to borrow like i'm in third grade.


When you buy a stock, you are buying ownership in a company. When you sell a stock short, you are borrowing shares of a stock from someone else and selling them, hoping to buy them back later at a lower price and return them to the lender. The cost to borrow a stock is the fee you pay to the lender for the privilege of borrowing their shares.

The cost to borrow a stock can vary depending on a number of factors, including the liquidity of the stock, the volatility of the stock, and the demand for the stock. In general, stocks that are more difficult to borrow will have a higher cost to borrow.

The cost to borrow a stock is an important factor to consider when short selling a stock. If the cost to borrow is too high, it can eat into your profits if the stock does decline in price. Conversely, if the cost to borrow is too low, you may be missing out on potential profits if the stock does decline in price.

Here's an example to help you understand cost to borrow. Let's say you want to short sell 100 shares of Apple stock. The current market price of Apple stock is $100 per share. The cost to borrow Apple stock is 2% per year. This means that you will pay the lender $2 per share per year, or $200 per year, to borrow the shares.

If you short sell Apple stock and the price of the stock declines to $90 per share, you will make a profit of $1,000. However, you will also have to pay the lender the $200 cost to borrow the shares. So, your net profit will be $800.

On the other hand, if the price of Apple stock increases to $110 per share, you will lose $1,000. In addition, you will still have to pay the lender the $200 cost to borrow the shares. So, your total loss will be $1,200.

As you can see, the cost to borrow a stock can have a significant impact on your profits or losses when short selling. It is important to factor in the cost to borrow when making a decision to short sell a stock.
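
For those who want to check Bard's numbers, here is a quick Python sketch of the same arithmetic. The share count, prices, and 2% fee are taken from the answer above; the one-year holding period is an assumption the $200/year figure implies:

```
# Sanity-check the arithmetic in the example above.
shares = 100
entry_price = 100.0   # short 100 shares at $100
borrow_rate = 0.02    # 2% per year on the borrowed value
years = 1             # assumption implied by the $200/year figure

borrow_fee = shares * entry_price * borrow_rate * years  # = $200

for exit_price in (90.0, 110.0):
    gross = shares * (entry_price - exit_price)  # short profit before fees
    net = gross - borrow_fee
    print(f"exit at ${exit_price:.0f}: gross {gross:+.0f}, net {net:+.0f}")
# exit at $90:  gross +1000, net +800   -> the $800 net profit
# exit at $110: gross -1000, net -1200  -> the $1,200 total loss
```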

Koala-13B:

explain a stocks cost to borrow like i'm in third grade.


Sure! Imagine you have a really big toy store and you want to borrow a bunch of toys to sell at your store. You might go to the toy store and ask to borrow some toys, and the owner of the toy store might say "sure, you can borrow some toys, but you have to pay me a fee first." This fee is called the "cost to borrow." It's like when you borrow a toy from a friend and they ask for a dollar or two to hold onto until you give it back. The cost to borrow is like that - it's like paying a dollar or two to borrow the toy. So, when you borrow stocks, you have to pay extra money to the person who owns the stocks. This extra money is called the "cost to borrow." It's like when you borrow a toy from a friend and they ask for a dollar or two to hold onto until you give it back. The cost to borrow is like that - it's like paying a dollar or two to borrow the toy.
 