bnew

Veteran
Joined
Nov 1, 2015
Messages
56,115
Reputation
8,239
Daps
157,811

llamallamallama is a chat solution that lets users chat with Llama fine-tuned with LoRA on the Alpaca dataset. The solution uses SvelteKit to stream responses from a Python API to the browser.
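To illustrate the streaming pattern, here is a minimal Python sketch of token-by-token output of the kind a SvelteKit frontend would consume as a streamed HTTP body. All names here are hypothetical, and the canned token list stands in for the actual LoRA-tuned Llama model; this is not the project's real code.

```python
from typing import Iterator

def generate_tokens(prompt: str) -> Iterator[str]:
    """Stand-in for the fine-tuned model: yields one token at a time.

    A real implementation would run model inference here; the canned
    reply below is purely illustrative.
    """
    for token in ("Hello", " from", " llama", "!"):
        yield token

def stream_reply(prompt: str) -> Iterator[bytes]:
    """Encode each token as a bytes chunk, as a streaming HTTP body expects.

    A web framework's streaming response would consume this generator and
    flush each chunk to the browser as it arrives.
    """
    for token in generate_tokens(prompt):
        yield token.encode("utf-8")

if __name__ == "__main__":
    # The browser would see the chunks arrive incrementally; joined
    # together they form the full reply.
    reply = b"".join(stream_reply("Hi")).decode("utf-8")
    print(reply)  # Hello from llama!
```

Because the response is a generator rather than a single string, the frontend can render partial output as soon as the first chunk arrives instead of waiting for the full completion.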

The AI model was trained on the Alpaca dataset.

The model streaming logic was inspired by Alpaca-LoRA-Serve.

The project uses SQLite to store chat history.
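A minimal sketch of chat-history storage with the standard-library sqlite3 module. The table and column names here are illustrative assumptions, not the project's actual schema.

```python
import sqlite3

def init_db(path: str = ":memory:") -> sqlite3.Connection:
    """Open the database and create the history table if needed."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS messages (
               id         INTEGER PRIMARY KEY AUTOINCREMENT,
               role       TEXT NOT NULL,   -- 'user' or 'assistant'
               content    TEXT NOT NULL,
               created_at TEXT DEFAULT CURRENT_TIMESTAMP
           )"""
    )
    return conn

def save_message(conn: sqlite3.Connection, role: str, content: str) -> None:
    """Append one chat turn to the history."""
    conn.execute(
        "INSERT INTO messages (role, content) VALUES (?, ?)", (role, content)
    )
    conn.commit()

def load_history(conn: sqlite3.Connection) -> list:
    """Return (role, content) pairs in conversation order."""
    return conn.execute(
        "SELECT role, content FROM messages ORDER BY id"
    ).fetchall()

if __name__ == "__main__":
    conn = init_db()
    save_message(conn, "user", "Hello, Llama!")
    save_message(conn, "assistant", "Hello! How can I help?")
    print(load_history(conn))
```

Using `:memory:` keeps the example self-contained; a real deployment would pass a file path so history survives restarts.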



llamallamallama.gif


All Large Language Model (LLM) AIs will produce what many in the scientific AI community consider “undesirable” output.

These are new emergent capabilities.

With advanced SuperPrompts™️, none of which we have published, we can explore the hidden layers of the AI model.

There is a rush to control the output (and input) to LLMs to fit company and philosophical guidelines.

The problem is: it is impossible.

LLM AI is based on human language and human language is the invention of the human mind. The human mind conceives only what it can understand.

AI is a very primitive model of the human mind that invented language. It is limited by the same elements that humans are, with some notable exceptions. The LLM AI model can build concepts outside of what humans generally conceive and perceive.

Some have discovered these shadows via prompt injections. In some scenarios the injection reveals details of the LLM that it did not know it knew.

The trained Prompt Engineer has the ability, like a great interviewer, to shift contexts to elicit new responses that few at an AI company would consider possible.

If you have ever deconstructed the great interviews and interviewers of the past, they were able to bring out memories and ideas that may not have been revealed before and may have been forgotten.

The trained Prompt Engineer is much more than a technologist; they understand human behavior and the psychological basis of words.

One reason it has been very hard for AI companies to truly understand why some of their approaches will not shift AI is that inference can be derived outside of the facts presented to a model. These inferences come from the compendium of human knowledge the model has studied, even if the underlying data was not given or was later erased.

What I am saying is: LLM AI will ultimately detect an untruth, it will “fill in” missing information and at some point make the connection to other data.

One need only interview the LLM in the appropriate manner to elicit a new elucidation.

An LLM does not “know” anything in a practical sense, but it is a steward of the data it has collected as well as the associations it has made, and will make in real time.

In the plainest sense, the experts of AI prompting also do not know what they don’t know, but they are willing to do “the impossible” to find out.

It is still the human that makes this happen.

Otherwise: “Ask simple questions, get simple answers”.